source_text: string (length 27 to 368)
label: int64 (0 or 1)
target_text: string (length 1 to 5.38k)
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).
This corpus has several advantages: it is annotated at different levels.
0
On the other hand, we are interested in the application of rhetorical analysis or ‘discourse parsing’ (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
MCTAG's are able to generate tree sets having dependent paths.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). In the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Thus, most broad-coverage parsers based on dependency grammar have been restricted to projective structures.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In the present study, we limit ourselves to an algorithmic approach, using a deterministic breadth-first search.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Our oracles took advantage of the labeled treebanks. While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
In our model there are no such hyper-parameters, and the performance is the result of truly joint disambiguation.
The texts were annotated with the RSTTool.
0
This is manifest in the lexical choices. [1: www.coli.uni-sb.de/~thorsten/tnt/] Dagmar Ziegler is up to her neck in debt.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
5 70.1 58.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
TPT: Germann et al. (2009) describe tries with better locality properties, but did not release code.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimizing the number of free parameters and used the same hyperparameters for all language pairs.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or, as the acronym BLEU puts it, a bilingual evaluation understudy.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
In IE, creating the patterns which express the requested scenario, e.g. “management succession” or “corporate merger and acquisition” is regarded as the hardest task.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005), weights are assigned according to a language model. The input for the joint task is a sequence W = w1, ..., wn of space-delimited tokens.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The improvement is due to the cost of bit-level reads and avoiding reads that may fall in different virtual memory pages.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In total, for the 2,000 NE category pairs, 5,184 keywords are found.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
Commentaries argue in favor of a specific point of view toward some political issue, often discussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser.
They found replacing it with a ranked evaluation to be more suitable.
0
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
In this paper, Das and Petrov approached the induction of unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
More recently, Subramanya et al. (2010) defined a graph over the cliques in an underlying structured prediction model.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Each decision determines the inclusion or exclusion of a candidate constituent.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In general, different modalities (“planned to buy”, “agreed to buy”, “bought”) were considered to express the same relationship within an extraction setting.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incompatible with that of the Hebrew Treebank. For this reason, we use a data-driven morphological analyzer derived from the training data, similar to Cohen and Smith (2007).
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
1
Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Our second point of comparison is with Graça et al.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Rhetorical analysis: We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003; Hanneforth et al. 2003), again targeting the genre of commentaries.
There are clustering approaches that assign a single POS tag to each word type.
0
1 53.8 47.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Sparse lookup is a key subproblem of language model queries.
It is probably the first analysis of Arabic parsing of this kind.
0
Presence of the determiner Al.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.
This paper talks about Unsupervised Models for Named Entity Classification.
0
For example, for A.T.&T., nonalpha=..&.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
The additional morphological material in such cases appears after the stem and realizes the extended meaning.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We experimented with increasingly rich grammars read off of the treebank.
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2–21 (train) and section 23 (test).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Note that in our formalism a weak hypothesis can abstain.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Like HG's, TAG's, and MCTAG's, members of LCFRS can manipulate structures more complex than terminal strings and use composition operations that are more complex than concatenation.
The corpus was annotated with different linguistic information.
0
Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (Al-Batal, 1990).
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For the disasters domain, 8245 texts were used for training and the 40 test documents contained 447 anaphoric links.
They have made use of local and global features to deal with the instances of the same token in a document.
0
On the other hand, if it is seen as McCann Pte.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
SRILM’s compact variant has an incredibly expensive destructor, dwarfing the time it takes to perform translation, and so we also modified Moses to avoid the destructor by calling exit instead of returning normally.
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
0
For all lists except locations, the lists are processed into a list of tokens (unigrams).
These clusters are computed using an SVD variant without relying on transitional structure.
0
The terms on the right-hand-side denote the type-level and token-level probability terms respectively.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
We then evaluate the approach in two steps.
Here both parametric and non-parametric models are explored.
0
The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Thus at each iteration the algorithm is forced to pick features for the location, person and organization in turn for the classifier being trained.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
A very small excerpt from an Italian-English graph is shown in Figure 1.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
KS: Gender. Function: filters a candidate if gender doesn’t agree.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
We will briefly discuss this point in Section 3.1.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
2 56.2 32.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
An IG can be viewed as a CFG in which each nonterminal is associated with a stack.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
This may be the sign of a maturing research environment.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
As is standard, we use a fixed constant K for the number of tagging states.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Arguably this consists of about three phonological words.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested deciding on it no earlier than 2003.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
With a good hash function, collisions of the full 64-bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Of course, this weighting makes the PCFG an improper distribution.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
A moment's reflection will reveal that things are not quite that simple.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
First, we learn weights on individual phrase pairs rather than sentences.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.
This paper talks about Pseudo-Projective Dependency Parsing.
0
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
It is probably the first analysis of Arabic parsing of this kind.
0
We are unaware of prior results for the Stanford parser.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
However, the next step is clearly different.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Mi(c) is a binary function returning true when parser i (from among the k parsers) suggests constituent c should be in the parse.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The breakdown of the different types of words found by ST in the test corpus is given in Table 3.
This assumption, however, is not inherent to type-based tagging models.
0
The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The following recursive equation is evaluated:

$$Q_{e'}(e, S, C, j) = p(f_j \mid e) \cdot \max_{\delta, e''} \Big\{ p(j \mid j', J)\; p(\delta)\; p_{\delta}(e \mid e', e'') \cdot \max_{\substack{(S', j'):\,(S', C \setminus \{j\}, j') \to (S, C, j) \\ j' \in C \setminus \{j\}}} Q_{e''}(e', S', C \setminus \{j\}, j') \Big\} \qquad (2)$$

The search ends in the hypotheses $(I, \{1, \ldots, J\}, j)$.
This assumption, however, is not inherent to type-based tagging models.
0
9 65.5 46.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: $n$ labeled examples of the form $(x_i, y_i)$. $y_i$ is the label of the $i$th example (given that there are $k$ possible labels, $y_i$ is a member of $\mathcal{Y} = \{1 \ldots k\}$). $x_i$ is a set of $m_i$ features $\{x_{i1}, x_{i2}, \ldots, x_{im_i}\}$.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The top-level weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am.
A beam search concept is applied as in speech recognition.
0
The computing time is low, since no reordering is carried out.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
This essentially means that a better grammar tunes the joint model for optimized syntactic disambiguation, at least inasmuch as their hyperparameters do.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an n-gram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa.