source_text: string (27–368 chars)
label: int64 (0 or 1)
target_text: string (1–5.38k chars)
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We also mark all nodes that dominate an SVO configuration (containsSVO).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
a classifier.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Rhetorical analysis: We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003; Hanneforth et al. 2003), again targeting the genre of commentaries.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In general, m, l, l′ ∉ {l1, l2, l3}, and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We will evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
The texts were annotated with the RSTtool.
0
There are still some open issues to be resolved with the format, but it represents a first step.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
The average fluency judgement per judge ranged from 2.33 to 3.67, while the average adequacy judgement ranged from 2.56 to 4.13.
BABAR performed well in both the terrorism and natural disaster domains, and the contextual role knowledge proved especially successful for resolving pronouns.
0
Ex: Mr. Cristiani, president of the country ...
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
[table fragment]
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The method shares some characteristics of the decision list algorithm presented in this paper.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The focus of our work is on the use of contextual role knowledge for coreference resolution.
They have made use of local and global features to deal with instances of the same token in a document.
0
Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).
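As an illustrative sketch (not the system's actual implementation), such acronym candidates can be collected with a simple pattern:

```python
import re

# All-capital tokens of length >= 2 in the text zone are acronym candidates (e.g., IBM).
ACRO_RE = re.compile(r"\b[A-Z]{2,}\b")

def collect_acronyms(text_zone):
    """Return the set of all-capitalized words found in a text zone."""
    return set(ACRO_RE.findall(text_zone))

print(collect_acronyms("IBM and NASA signed a deal."))  # {'IBM', 'NASA'}
```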
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
We consider two variants of Berg-Kirkpatrick et al.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Compared with the widely-used SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory. The TRIE structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption; it simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
In addition to the optimizations specific to each data structure described in Section 2, we implement several general optimizations for language modeling.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
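A hedged sketch of Dunning's log-likelihood (G²) statistic over a 2×2 contingency table, which is the statistic named above (the example counts are invented):

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 co-occurrence table: k11 = joint count,
    k12/k21 = each event alone, k22 = neither event."""
    n = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    g2 = 0.0
    for observed, expected in [(k11, row1 * col1 / n), (k12, row1 * col2 / n),
                               (k21, row2 * col1 / n), (k22, row2 * col2 / n)]:
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2.0 * g2

# Toy example: a caseframe pair co-occurring 30 times in 10,000 contexts.
print(round(log_likelihood_ratio(30, 70, 120, 9780), 2))
```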
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
Explanations offered for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
[table fragment]
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that at least one human judge agreed.
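To make the metric concrete, here is a hedged sketch (function and toy data are invented; the study used eight judges) of precision as a function of the agreement threshold k, counting a segmenter word as correct if at least k judges also produced it:

```python
def precision_at_k(system_words, judge_segmentations, k):
    """Fraction of system words that at least k human judges also produced."""
    agreed = sum(
        1 for w in system_words
        if sum(w in judge for judge in judge_segmentations) >= k
    )
    return agreed / len(system_words)

# Toy example with 3 judges; words are tuples of characters.
system = [("A", "B"), ("C",), ("D", "E")]
judges = [{("A", "B"), ("C",)}, {("A", "B")}, {("A", "B"), ("D", "E")}]
for k in range(1, 4):
    print(k, round(precision_at_k(system, judges, k), 2))
```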
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC-6 and MUC-7 test data.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Table 2 shows these similarity measures.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We have not explored this strategy.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
With regard to purely morphological phenomena, certain processes are not handled elegantly within the current framework. Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.
They found replacing it with a ranked evaluation to be more suitable.
0
(b) does the translation have the same meaning, including connotations?
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
If the expression is a word or a short phrase (like “corporation” and “company”), it is called a “synonym”.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Both parameters depend on a single hyperparameter α.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
Pseudo-Projective Dependency Parsing
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.
It is probably the first analysis of Arabic parsing of this kind.
0
But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
This significantly underperforms log-linear combination.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Extracted phrases by themselves are sometimes not meaningful to consider without context, so we set the following criteria.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend.
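A minimal illustration of the sign-bit trick (KenLM itself is C++; this Python sketch only demonstrates the IEEE 754 property being exploited, namely that +0.0 and -0.0 compare equal but differ in their sign bit):

```python
import math
import struct

def encode_zero_backoff(extends):
    """Encode a zero log backoff, hiding the 'context extends' flag in the sign bit."""
    return struct.pack("<f", 0.0 if extends else -0.0)

def zero_backoff_extends(packed):
    """Recover the flag via copysign, since +0.0 == -0.0 under ordinary comparison."""
    (value,) = struct.unpack("<f", packed)
    return math.copysign(1.0, value) > 0.0

assert zero_backoff_extends(encode_zero_backoff(True))
assert not zero_backoff_extends(encode_zero_backoff(False))
print(0.0 == -0.0)  # True: the flag is invisible to normal float comparisons
```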
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In this case, we have no finite-state restrictions for the search space.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
F-measure is the harmonic mean of precision and recall, 2PR/(P + R).
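Stated as code (a trivial sketch):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

print(f_measure(0.90, 0.70))  # 0.7875
```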
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
(2006).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Table 4 shows the results.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
The NE tagger is a rule-based system with 140 NE categories [Sekine et al. 2004].
This paper conducted research in the area of automatic paraphrase discovery.
0
Although this is not a precise criterion, most cases we evaluated were relatively clear-cut.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994).
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
The results of the manual and automatic evaluation of the participating systems' translations are detailed in the figures at the end of this paper.
Here we present two algorithms.
0
In order to minimize Zt, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature xt) which has values for W+ and W− that minimize Equation (4).
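Since the optimal alpha_t = 0.5 * ln(W+/W−) turns the per-feature bound into Z_t = W0 + 2*sqrt(W+ * W−), the selection step reduces to minimizing that quantity. A hedged sketch (the ε smoothing and the data layout are assumptions, not the paper's exact bookkeeping):

```python
import math

def best_feature(candidates, weights, labels, fires):
    """Pick the feature x_t minimizing Z_t = W0 + 2*sqrt(W+ * W-).

    weights[i]: current AdaBoost weight of example i
    labels[i]:  gold label in {-1, +1}
    fires(f, i): True if feature f applies to example i (h abstains otherwise)
    """
    eps = 1e-8  # smoothing so alpha stays finite (an assumption, not the paper's value)
    best = None
    for f in candidates:
        w_plus = sum(w for i, w in enumerate(weights) if fires(f, i) and labels[i] == +1)
        w_minus = sum(w for i, w in enumerate(weights) if fires(f, i) and labels[i] == -1)
        w_abstain = sum(weights) - w_plus - w_minus  # mass where h(x) = 0
        z = w_abstain + 2.0 * math.sqrt((w_plus + eps) * (w_minus + eps))
        alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
        if best is None or z < best[0]:
            best = (z, f, alpha)
    return best

# Toy usage: two candidate features over four weighted examples.
weights = [0.25, 0.25, 0.25, 0.25]
labels = [+1, +1, -1, -1]
firing = {"f1": {0, 1}, "f2": {0, 2}}
z, f, alpha = best_feature(["f1", "f2"], weights, labels, lambda f, i: i in firing[f])
print(f, round(z, 4), round(alpha, 2))
```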
The AdaBoost algorithm was developed for supervised learning.
0
Taking N to be the number of examples an algorithm classified correctly (where all gold-standard items labeled noise were counted as being incorrect), we calculated two measures of accuracy; see the table.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
For each language and setting, we report one-to-one (1-1) and many-to-one (m-1) accuracies.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
2.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Lexical and Morphological Ambiguity The rich morphological processes for deriving Hebrew stems give rise to a high degree of ambiguity for Hebrew space-delimited tokens.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For example, the passive voice pattern “<subject> were kidnapped” and the active voice pattern “kidnapped <direct object>” are merged into a single normalized pattern “kidnapped <patient>”. For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes. These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g., “<agent> kidnapped” or “kidnapped <patient>”), and (2) predicate-argument relations associated with both verbs and nouns (e.g., “kidnapped for <np>” or “vehicle with <np>”).
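As a hedged illustration of this merging step (the string conventions and mapping are invented for the sketch, not BABAR's actual code):

```python
def normalize_pattern(pattern):
    """Merge active and passive voice extraction patterns into one caseframe:
    '<subject> were kidnapped'  -> 'kidnapped <patient>'
    'kidnapped <direct object>' -> 'kidnapped <patient>'
    """
    if pattern.startswith("<subject> were "):
        verb = pattern[len("<subject> were "):]
        return f"{verb} <patient>"
    if pattern.endswith(" <direct object>"):
        verb = pattern[: -len(" <direct object>")]
        return f"{verb} <patient>"
    return pattern  # predicate-argument patterns pass through unchanged

for p in ["<subject> were kidnapped", "kidnapped <direct object>", "kidnapped for <np>"]:
    print(p, "->", normalize_pattern(p))
```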
Combining multiple highly-accurate independent parsers yields promising results.
1
Combining multiple highly-accurate independent parsers yields promising results.
Two general approaches are presented and two combination techniques are described for each approach.
0
Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall.
Combining multiple highly-accurate independent parsers yields promising results.
0
The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
attaching to terms denoting human beings.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
[model figure: graphical model variables, including word types W = (W1, ..., Wn), tag assignments T, and parameters ψ, β]
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
att.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The rest of the paper is structured as follows.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Notice that the CC-domain is a special case.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
These are not full case frames in the traditional sense, but they approximate a simple case frame with a single slot.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The following features were used: full-string=x The full string (e.g., for Maury Cooper, full-string=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period.
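A hedged sketch of these spelling features (the exact feature-string formatting is an assumption):

```python
def spelling_features(spelling):
    """Extract full-string, contains, allcap1, and allcap2 features."""
    features = [f"full-string={spelling.replace(' ', '_')}"]
    words = spelling.split()
    if len(words) > 1:
        features.extend(f"contains({w})" for w in words)          # e.g., contains(Maury)
    elif spelling.isupper() and spelling.isalpha():
        features.append("allcap1")                                # e.g., IBM
    elif "." in spelling and spelling.replace(".", "").isupper():
        features.append("allcap2")                                # e.g., A.B.C.
    return features

print(spelling_features("Maury Cooper"))
print(spelling_features("IBM"))
print(spelling_features("A.B.C."))
```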
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
As the two NE categories are the same, we can’t differentiate phrases with different orders of participants – whether the buying company or the to-be-bought company comes first.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.
All the texts were annotated by two people.
0
Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning.
This paper conducted research in the area of automatic paraphrase discovery.
0
One possibility is to use n-grams based on mutual information.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf.
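As a hedged illustration, pointwise mutual information over bigram counts is one such association score (the counts are invented):

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: log2( p(x,y) / (p(x) * p(y)) )."""
    return math.log2((count_xy / total) / ((count_x / total) * (count_y / total)))

# Toy counts: a two-character candidate word seen 50 times; its characters
# seen 200 and 300 times, in a corpus of one million tokens.
print(round(pmi(50, 200, 300, 1_000_000), 2))
```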
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Unlike Kahane et al. (1998), we do not regard a projectivized representation as the final target of the parsing process.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Nonetheless, parse quality is much lower in the joint model because a lattice is effectively a long sentence.
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
A very small excerpt from an Italian-English graph is shown in Figure 1.
They have made use of local and global features to deal with instances of the same token in a document.
0
Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC-6 and MUC-7 test data.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
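Stated as code (a trivial sketch of that normalization, with invented data):

```python
def normalize_judgements(raw):
    """Center each judge's raw judgements at 0 by subtracting that judge's average.

    raw: dict mapping judge -> list of raw judgements.
    """
    normalized = {}
    for judge, scores in raw.items():
        average = sum(scores) / len(scores)
        normalized[judge] = [s + (0 - average) for s in scores]
    return normalized

print(normalize_judgements({"judge1": [2, 4], "judge2": [3, 5]}))
# {'judge1': [-1.0, 1.0], 'judge2': [-1.0, 1.0]}
```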
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
A typical situation is shown in Figure 1.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The data structure was populated with 64-bit integers sampled uniformly without replacement.
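One way to reproduce such a benchmark population (a sketch; the paper's actual harness is C++ and not shown in this excerpt):

```python
import random

def sample_uint64(n, seed=0):
    """Sample n distinct 64-bit integers uniformly without replacement."""
    rng = random.Random(seed)
    seen = set()
    while len(seen) < n:  # collisions are astronomically rare for small n
        seen.add(rng.getrandbits(64))
    return sorted(seen)

keys = sample_uint64(5)
print(len(keys) == len(set(keys)), all(0 <= k < 2**64 for k in keys))
```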
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations — or groups of relations in particular configurations — are signalled by speakers with prosodic means.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
6 Joint Segmentation and Parsing.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
However, when we pre-tag the input—as is recommended for English—we notice a 0.57% F1 improvement.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The same numbers were used for each data structure.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Juri Ganitkevitch answered questions about Joshua.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
A total of 13,976 phrases were grouped.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Word type | N | %
Dictionary entries | 2,543 | 97.47
Morphologically derived words | 3 | 0.11
Foreign transliterations | 9 | 0.34
Personal names | 54 | 2.07
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
A simple lexicalized PCFG with second-order Markovization gives relatively poor performance: 75.95% F1 on the test set. But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context.
It is probably the first analysis of Arabic parsing of this kind.
0
The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.