source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that “report” things also “add” and “state” things; crimes that are “perpetrated” are often later “condemned”.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
The type-level posterior term can be computed according to P(Ti | W, T−i, β) ∝ … Note that each round of sampling the Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
7 www.cis.upenn.edu/~pdtb/ 8 www.eml-research.de/english/Research/NLP/Downloads … had to buy a new car.
This paper conducted research in the area of automatic paraphrase discovery.
0
As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
These are shown, with their associated costs, as follows: AB/nc 4.0, ABC/jj 6.0, CD/vb 5.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
2.1 Part-of-speech tags.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Within the RST “user community” there has also been discussion of whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Additionally, it works about three times as fast as the IBM-style search.
The model employs various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary.'
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
0
The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition of projectivity (Kahane et al., 1998): the arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
In total there are O(K^2) transition parameters.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
One annotator suggested that this was the case for as much as 10% of our test sentences.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
6.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Before describing the unsupervised case we first describe the supervised version of the algorithm. Input to the learning algorithm: n labeled examples of the form (xi, yi), where yi is the label of the ith example (given that there are k possible labels, yi is a member of Y = {1, ..., k}) and xi is a set of mi features {xi1, xi2, ..., ximi}.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted.
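As a worked illustration of the equally weighted F-measure this row refers to, here is a minimal Python sketch; the function name and the beta parameterization are illustrative, not taken from the paper.

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """Balanced F-measure; beta = 1.0 weights recall and precision equally."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Example: Pr = 0.8, Rec = 0.7 gives F ~= 0.747.
print(f_measure(0.8, 0.7))
```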
This paper presents a maximum entropy-based named entity recognizer (NER).
0
It uses a maximum entropy framework and classifies each word given its features.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block whether one system has a higher BLEU score than the other, and then use the sign test.
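To make the block-level sign test concrete, here is a small Python sketch under stated assumptions: blocks where the two systems tie are dropped beforehand, and a two-sided binomial tail is used. The function name and the example counts are illustrative.

```python
from math import comb

def sign_test_p_value(wins_a: int, wins_b: int) -> float:
    """Two-sided sign test: p-value of a win/loss split at least this
    lopsided, under the null that each block is a fair coin flip."""
    n = wins_a + wins_b               # ties already dropped
    k = max(wins_a, wins_b)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# Usage: count the blocks (e.g. 100 in-domain blocks of 20 sentences)
# where system A has the higher BLEU score, then test.
print(sign_test_p_value(wins_a=68, wins_b=32))  # well below 0.05
```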
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Word type, N, %: Dictionary entries 2,543 (97.47); Morphologically derived words 3 (0.11); Foreign transliterations 9 (0.34); Personal names 54 (2.07).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Memory usage is the same as with binary search and lower than with set.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
1
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
The model employs various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
This is the form of recursive levels in iDafa constructs.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block whether one system has a higher BLEU score than the other, and then use the sign test.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ. Table 5 shows type- and token-level error rates for each corpus.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
ICOC and CSPP contributed the greatest improvements.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The quasi-monotone search performs best in terms of both error rates mWER and SSER.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
Replacing this with a ranked evaluation seems to be more suitable.
0
About half of the participants of last year’s shared task participated again.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We observe similar trends when using another measure – type-level accuracy (defined as the fraction of words correctly assigned their majority tag). Table 4: Comparison of our method (FEATS) to state-of-the-art methods, reporting 1-1 / m-1 accuracy per method. English: BK10 (EM) 48.3/68.1, BK10 (LBFGS) 56.0/75.5, FEATS Best 50.9/66.4, FEATS Median 47.8/66.4. Danish: BK10 (EM) 42.3/66.7, BK10 (LBFGS) 42.6/58.0, FEATS Best 52.1/61.2, FEATS Median 43.2/60.7. Dutch: BK10 (EM) 53.7/67.0, BK10 (LBFGS) 55.1/64.7, FEATS Best 56.4/69.0, FEATS Median 51.5/67.3. Portuguese: BK10 (EM) 50.8/75.3, BK10 (LBFGS) 43.2/74.8, G10 44.5/69.2, FEATS Best 64.1/74.5, FEATS Median 56.5/70.1. Spanish: BK10 (LBFGS) 40.6/73.2, FEATS Best 58.3/68.9, FEATS Median 50.0/57.2. (G10 results are unavailable for the other languages.)
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
In addition, as the CRF and PCFG look at similar sorts of information from within two inherently different models, they are far from independent and optimizing their product is meaningless.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Our initial experimentation with the evaluation tool showed that this is often too overwhelming.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
We have shown that the maximum entropy framework is able to use global information directly.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
KenLM: Faster and Smaller Language Model Queries
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading. Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
In Eq.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Therefore, for an n-gram w_1^n, all leftward extensions w_0^n are an adjacent block in the (n+1)-gram array.
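A minimal Python sketch of why this works, assuming n-grams are stored with their words reversed and sorted lexicographically (consistent with KenLM's reversed-trie layout); the names and the sentinel upper bound are illustrative.

```python
import bisect

def leftward_extensions(sorted_rev, ngram):
    """All leftward extensions w0..wn of w1..wn share the reversed prefix
    (wn, ..., w1), so they occupy one contiguous block of the sorted,
    reversed (n+1)-gram array, found with two bisections."""
    key = tuple(reversed(ngram))
    lo = bisect.bisect_left(sorted_rev, key)
    hi = bisect.bisect_left(sorted_rev, key + ("\uffff",))  # sentinel > any word
    return [tuple(reversed(entry)) for entry in sorted_rev[lo:hi]]

bigrams_rev = sorted([("cat", "the"), ("cat", "a"), ("dog", "the")])
print(leftward_extensions(bigrams_rev, ("cat",)))  # [('a', 'cat'), ('the', 'cat')]
```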
There is no global pruning.
0
However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n^2 · 2^n), using the algorithm by Held and Karp.
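For reference, a compact Python sketch of the Held-Karp dynamic program; the O(n^2 · 2^n) bound comes from the (subset, endpoint) state space. This is the generic textbook algorithm, not the authors' reordering-specific adaptation.

```python
from itertools import combinations

def held_karp(dist):
    """Shortest closed tour over all cities; dist is an n x n matrix and
    city 0 is the fixed start/end point. O(n^2 * 2^n) time."""
    n = len(dist)
    # C[(S, j)]: cheapest path from 0 through exactly the set S, ending at j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

print(held_karp([[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]))  # 21
```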
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Our experiments all concern the analytical annotation, and the first experiment is based only on the training part.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments.
Combining multiple highly-accurate independent parsers yields promising results.
0
The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
Sometimes, however, these beliefs can be contradictory.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The final score is obtained from: max_{e,e'} max_{j ∈ {J−L, ..., J}} p($ | e, e') · Q_{e'}(e, I, {1, ..., J}, j), where p($ | e, e') denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Evaluation of links: A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot.
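A minimal Python sketch of the shrinking-bounds search described here, using a plain bisection pivot; KenLM's actual code also offers an interpolated pivot choice, which this sketch does not implement.

```python
def find(A, k):
    """Locate key k in sorted array A by recursively shrinking [l, u]."""
    l, u = 0, len(A) - 1
    while l <= u:
        pivot = (l + u) // 2
        if A[pivot] < k:
            l = pivot + 1    # pivot becomes the new lower bound
        elif A[pivot] > k:
            u = pivot - 1    # pivot becomes the new upper bound
        else:
            return pivot
    return None              # k is absent

print(find([2, 3, 5, 7, 11, 13], 7))  # 3
```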
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
A compatible view is presented by Charniak et al. (1996), who consider the kind of probabilities a generative parser should get from a PoS tagger, and conclude that these should be P(w|t) “and nothing fancier”. In our setting, therefore, the lattice is not used to induce a probability distribution on a linear context; rather, it is used as a common denominator for state-indexation of all segmentation possibilities of a surface form.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
evaluated to account for the same fraction of the data.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We concentrate on those sets.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.
This assumption, however, is not inherent to type-based tagging models.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
On the English side, however, the vertices (denoted by Ve) correspond to word types.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
In many cases these failures in recall would be fixed by having better estimates of the actual probabilities of single-hanzi words, since our estimates are often inflated.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Precision.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The top-level weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
), which precludes a single universal approach to adaptation.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
We apply a beam search concept as in speech recognition.
They found replacing it with a ranked evaluation to be more suitable.
0
This is the first time that we organized a large-scale manual evaluation.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
We further thank Dr. J.-S.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Second, we show that although the Penn Arabic Treebank is similar to other treebanks in gross statistical terms, annotation consistency remains problematic.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
We have argued that the proposed method performs well.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
“Smith estimates Lotus will make a profit this quarter…”, our system extracts “Smith estimates Lotus” as an instance.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Only IRSTLM does not support threading.
The texts were annotated with RSTTool.
0
On the other hand, we are interested in the application of rhetorical analysis or ‘discourse parsing’ (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
Chinese word segmentation can be viewed as a stochastic transduction problem.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
We trained this model by optimizing the following objective function: … Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
For each extension a new position is added to the coverage set.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The corpora for both settings are summarized in table 1.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Note that Zt is a normalization constant that ensures the distribution Dt+1 sums to 1; it is a function of the weak hypothesis ht and the weight αt for that hypothesis chosen at the t-th round.
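A small NumPy sketch of the reweighting step this normalizer belongs to, using the standard binary AdaBoost update; the array names and the {-1, +1} label encoding are illustrative assumptions.

```python
import numpy as np

def reweight(D, h, y, alpha):
    """One AdaBoost round: h (weak hypothesis outputs) and y (true labels)
    are in {-1, +1}; alpha is the weight chosen for this round."""
    D_unnorm = D * np.exp(-alpha * y * h)
    Z_t = D_unnorm.sum()      # normalization constant: makes D_{t+1} sum to 1
    return D_unnorm / Z_t

D0 = np.full(4, 0.25)
h = np.array([+1, +1, -1, -1])   # weak hypothesis outputs
y = np.array([+1, -1, -1, -1])   # true labels (one mistake, example 1)
print(reweight(D0, h, y, alpha=0.5))  # the mistake's weight grows, rest shrink
```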
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length ≤ 40.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
For each source word f, the list of its possible translations e is sorted according to p(f|e) · puni(e), where puni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.
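A tiny Python sketch of this candidate pruning, assuming lookup tables p_cond[f][e] = p(f|e) and p_uni[e] are available; the names, the toy numbers, and the limit parameter are illustrative.

```python
def best_translations(f, p_cond, p_uni, limit=50):
    """Rank candidate translations e of source word f by p(f|e) * puni(e)
    and keep only the top `limit` entries."""
    candidates = p_cond.get(f, {})
    ranked = sorted(candidates, key=lambda e: candidates[e] * p_uni[e],
                    reverse=True)
    return ranked[:limit]

p_cond = {"haus": {"house": 0.6, "home": 0.3, "building": 0.1}}
p_uni = {"house": 0.002, "home": 0.003, "building": 0.001}
print(best_translations("haus", p_cond, p_uni))  # ['house', 'home', 'building']
```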
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference— or similarity—between treebanks.
Replacing this with a ranked evaluation seems to be more suitable.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
This assumption, however, is not inherent to type-based tagging models.
0
Comparison with state-of-the-art taggers. For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
gao1gao1xing4xing4 'happily'. In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative marker bu4 'not' is inserted between the copy and the full verb.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
“The gun” will be extracted by the caseframe “fired <patient>”.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
We use a squared loss to penalize neighboring vertices that have different label distributions: ||qi − qj||^2 = Σ_y (qi(y) − qj(y))^2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.
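To spell the objective out, a short NumPy sketch under stated assumptions: q is a list of per-vertex label distributions, the graph is a weighted edge list, and gamma is a hypothetical regularization strength (the paper's exact hyperparameters are not reproduced here).

```python
import numpy as np

def propagation_objective(q, weighted_edges, gamma):
    """Squared penalty between neighboring label distributions, plus a
    squared pull toward the uniform distribution U over the label set."""
    num_labels = len(q[0])
    U = np.full(num_labels, 1.0 / num_labels)
    neighbor_loss = sum(w * np.sum((q[i] - q[j]) ** 2)
                        for i, j, w in weighted_edges)
    uniform_reg = gamma * sum(np.sum((qi - U) ** 2) for qi in q)
    return neighbor_loss + uniform_reg

q = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
print(propagation_objective(q, [(0, 1, 1.0)], gamma=0.1))  # 0.98 + 0.05
```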
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009).
The model employs various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).
This paper discusses the Potsdam Commentary Corpus, a corpus of German commentaries assembled by Potsdam University.
0
3.5 Improved models of discourse.