Dataset schema:
  source_text: string (lengths 27 to 368)
  label: int64 (values 0 or 1)
  target_text: string (lengths 1 to 5.38k)
The use of global features has yielded excellent performance on the MUC-6 and MUC-7 test data.
0
Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
RandLM is the clear winner in RAM utilization, but is also slower and lower quality.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am.
This corpus has several advantages: it is annotated at different levels.
0
The PCC is not the result of a funded project.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
As each global feature group is added to the list of features, we see improvements on both MUC6 and MUC7:

Table 3: F-measure after successive addition of each global feature group
             MUC6     MUC7
Baseline     90.75%   85.22%
+ ICOC       91.50%   86.24%
+ CSPP       92.89%   86.96%
+ ACRO       93.04%   86.99%
+ SOIC       93.25%   87.22%
+ UNIQ       93.27%   87.24%

Table 5: Comparison of results for MUC6 systems.
This corpus has several advantages: it is annotated at different levels.
0
information structure.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may admit multiple PoS tags.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For ft, the Good-Turing estimate just discussed gives us an estimate of p(unseen(ft) | ft), the probability of observing a previously unseen instance of a construction in ft given that we know that we have a construction in ft.
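As a rough sketch of this estimate, the probability mass given to unseen events can be approximated by N1/N, the fraction of the total count held by types observed exactly once (the function name and toy data here are hypothetical, not from the paper):

```python
from collections import Counter

def good_turing_unseen(tokens):
    """Good-Turing sketch: estimate the probability of a previously
    unseen type as N1 / N, where N1 is the number of types observed
    exactly once and N is the total number of tokens."""
    counts = Counter(tokens)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(tokens)

# toy corpus: "a" seen three times; "b", "c", "d" seen once each
p_unseen = good_turing_unseen(["a", "b", "a", "c", "d", "a"])
```

With three singleton types out of six tokens, the unseen mass is 3/6 = 0.5.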
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by a classifier.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Bikel et al.
These clusters are computed using an SVD variant without relying on transitional structure.
0
W : word types (W1, ..., Wn) (observed); T : tag assignments (T1, ..., Tn)
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
These clusters are computed using an SVD variant without relying on transitional structure.
This paper talks about Unsupervised Models for Named Entity Classification.
0
AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.
Replacing this with a ranked evaluation seems to be more suitable.
0
Given the limited number of judgements we received, we did not try to evaluate this.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
am 11.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We have not explored this strategy.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The reference medicinal product for Silapo is EPREX/ERYPO, which contains epoetin alfa.
BABAR has shown successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially helpful for pronouns.
0
Consequently, we cut their evidence values in half to lessen their influence.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)). The evidence combination rule is:

m3(S) = ( Σ_{X∩Y=S} m1(X) · m2(Y) ) / ( 1 − Σ_{X∩Y=∅} m1(X) · m2(Y) )   (1)
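Equation (1) is Dempster's rule of combination. A minimal sketch, with hypothetical focal elements represented as frozensets (the toy belief values are illustrative only):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination: combine two basic belief
    assignments m1 and m2, whose keys are frozensets of hypotheses.
    Mass assigned to conflicting (disjoint) pairs is renormalized away."""
    conflict = sum(v1 * v2
                   for (x, v1), (y, v2) in product(m1.items(), m2.items())
                   if not (x & y))
    m3 = {}
    for (x, v1), (y, v2) in product(m1.items(), m2.items()):
        s = x & y
        if s:  # skip the empty intersection (the conflict mass)
            m3[s] = m3.get(s, 0.0) + v1 * v2 / (1.0 - conflict)
    return m3

# hypothetical evidence from two knowledge sources
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
m3 = combine(m1, m2)
```

The combined masses again sum to one, since the conflicting mass (here 0.6 x 0.5 = 0.3) is divided out.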
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
Furthermore, the combination of pruning and vertical markovization of the grammar outperforms the Oracle results reported by Cohen and Smith.
They focused on phrases connecting two Named Entities, and proceed in two stages.
0
As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation.
The texts were annotated with the RSTtool.
0
For displaying and querying the annotated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (‘Sonderforschungsbereich’) revolving around information structure.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
For all languages we do not make use of a tagging dictionary.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data.
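A minimal linear-probing table sketch illustrates why such lookups are cache-friendly: after one hashed jump, probing only scans adjacent slots. This is an illustration of the general technique, not KenLM's actual implementation:

```python
class ProbingTable:
    """Toy open-addressing hash table with linear probing: collisions
    are resolved by stepping to the next adjacent slot, so a lookup is
    one random access followed by a short sequential scan."""

    def __init__(self, capacity):
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # probe the next slot
        self.slots[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None  # hit an empty slot: key is absent

t = ProbingTable(16)
t.insert("is one", -1.2)   # hypothetical bigram log-probabilities
t.insert("one of", -0.8)
```

Keeping the table sparse (low load factor) keeps probe chains short, which is the speed-for-memory trade the text describes.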
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Training and testing is based on the Europarl corpus.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
For example, CFG's cannot produce trees of the form shown in Figure 1 in which there are nested dependencies between S and NP nodes appearing on the spine of the tree.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The probabilities are incorporated into the Dempster-Shafer model using Equation 1.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Conditioned on T , features of word types W are drawn.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs’ structures; cf.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
May 1995).
This paper talks about Pseudo-Projective Dependency Parsing.
0
Thus, the arc from je to jedna will be labeled 5b↓ (to indicate that there is a syntactic head below it).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
There are 13 types of extensions needed to describe the verbgroup reordering.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case. (A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch.)
Combining multiple highly-accurate independent parsers yields promising results.
0
The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.
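A sketch of constituent voting under this P > 0.5 criterion, treating each parse as a set of labeled spans (the helper and toy spans are hypothetical; the actual system estimates P from parser reliabilities rather than raw votes):

```python
def combine_parses(parses):
    """Constituent-voting sketch: keep every constituent hypothesized
    by more than half of the parsers (for three parsers, at least two).
    Each parse is a set of (label, start, end) tuples."""
    votes = {}
    for parse in parses:
        for c in parse:
            votes[c] = votes.get(c, 0) + 1
    return {c for c, v in votes.items() if v > len(parses) / 2}

# three hypothetical parses of the same five-word sentence
p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 1), ("VP", 2, 5), ("S", 0, 5)}
hyp = combine_parses([p1, p2, p3])
```

Only spans proposed by at least two of the three parsers survive, so the minority analyses ("VP", 3, 5) and ("NP", 0, 1) are dropped.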
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.
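The block-level sign test described above can be sketched with an exact binomial computation, assuming a two-sided test under a fair-coin null hypothesis (function name hypothetical):

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided exact sign test sketch: under the null hypothesis each
    block is a fair coin flip, so the p-value is twice the probability
    of a split at least as extreme as the one observed."""
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, 8 wins against 2 losses over 10 blocks gives p ≈ 0.109, not significant at the usual 0.05 level.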
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Preprocessing the raw trees improves parsing performance considerably. We first discard all trees dominated by X, which indicates errors and non-linguistic text.
Their results show that their high-performance NER uses less training data than other systems.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
As observed by Kahane et al. (1998), any (nonprojective) dependency graph can be transformed into a projective one by a lifting operation, which replaces each non-projective arc wj → wk by a projective arc wi → wk such that wi →∗ wj holds in the original graph.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
(Volume 1, English language, 1996-1997, Format version 1, correction level 0.)

Table 2: General Knowledge Sources
Anaphor     Terrorism        Disasters
            Rec  Pr   F      Rec  Pr   F
Def. NPs    .43  .79  .55    .50  .72  .59
Pronouns    .42  .91  .58    .42  .82  .56
Total       .46  .76  .57    .42  .87  .57

Table 4: Individual Performance of KSs for Terrorism
Table 3: General + Contextual Role Knowledge Sources
... the larger MUC4 and Reuters corpora.

4.2 Experiments.
Here we present two algorithms.
0
The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraints.
It is annotated at several levels: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
36.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Other kinds of productive word classes, such as company names, abbreviations (termed suo1xie3 in Mandarin), and place names can easily be handled given appropriate models. (Note that 了 is normally pronounced as le0, but as part of a resultative it is liao3.)
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Dev set frequencies (Table 3) for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
(Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.
This paper conducted research in the area of automatic paraphrase discovery.
0
We evaluated the results based on two metrics.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The performance was 80.99% recall and 61.83% precision.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Recall that precision is defined to be the number of correct hits divided by the total number of items.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
(Blum and Mitchell 98) offer a promising formulation of redundancy, prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Chris Dyer integrated the code into cdec.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Others depend upon various lexical heuristics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
In MSA, SVO usually appears in non-matrix clauses.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
To prevent this we "smooth" the confidence by adding a small value, ε, to both W+ and W−, giving αt = (1/2) ln((W+ + ε) / (W− + ε)). Plugging the value of αt from Equ.
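The smoothed confidence value can be sketched directly (variable names are hypothetical; ε defaults to a small constant so that αt stays finite even when one of the weight sums is zero):

```python
from math import log

def smoothed_alpha(w_plus, w_minus, eps=1e-4):
    """Smoothed confidence sketch in the Schapire & Singer style:
    alpha_t = 0.5 * ln((W+ + eps) / (W- + eps)). Without eps, a zero
    W+ or W- would drive alpha_t to +/- infinity."""
    return 0.5 * log((w_plus + eps) / (w_minus + eps))
```

When the correctly and incorrectly weighted masses are equal, the confidence is zero; more mass on the correct side yields a positive αt.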
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
As was explained in the results section, “strength” or “add” are not desirable keywords in the CC-domain.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The Verbmobil task is an appointment scheduling task.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Combining multiple highly-accurate independent parsers yields promising results.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example).
This assumption, however, is not inherent to type-based tagging models.
0
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 11 metric.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Figure 2: An abstract example illustrating the segmentation algorithm (panels a–d; the best path is computed as BestPath(Id(I) ∘ D*)).
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Zco in Eq. (7) is at 0 when: 1) ∀i : sign(g1(xi)) = sign(g2(xi)); 2) |gj(xi)| → ∞; and 3) sign(g1(xi)) = yi for i = 1, ..., m. In fact, Zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In this paper k = 3 (the three labels are person, organization, location), and we set a = 0.1.
This assumption, however, is not inherent to type-based tagging models.
0
W : word types (W1, ..., Wn) (observed); T : tag assignments (T1, ..., Tn)
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The most popular approach to dealing with segmentation ambiguities is the maximum matching method, possibly augmented with further heuristics.
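A greedy maximum-matching sketch (toy dictionary and alphabetic "characters"; real systems add the further heuristics mentioned above and a realistic maximum word length):

```python
def max_match(text, dictionary, max_len=4):
    """Greedy left-to-right maximum matching: at each position take the
    longest dictionary entry starting there, falling back to a single
    character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

segmented = max_match("abcd", {"ab", "abc", "cd"})
```

Note the greedy behavior: "abc" is taken because it is the longest match at position 0, even though "ab" + "cd" would cover the string with two dictionary words; this is exactly the kind of ambiguity the heuristics are meant to patch.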
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
In all cases, the key is collapsed to its 64-bit hash.
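A sketch of collapsing a string key to a 64-bit hash; the hash function here is an arbitrary stand-in, not the one the paper uses. Because only the 64-bit value is stored, two distinct keys that collide on all 64 bits would be indistinguishable at query time, which is why collisions are checked when the model is built:

```python
import hashlib

def key_hash64(ngram):
    """Collapse an n-gram key to 64 bits (illustrative hash choice):
    the table stores this integer instead of the original string."""
    digest = hashlib.sha1(ngram.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little")

h = key_hash64("is one of")
```

Storing 8 bytes per key regardless of n-gram length is the memory saving; the cost is a vanishingly small but nonzero chance of a false hit.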
All the texts were annotated by two people.
0
This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The property of semilinearity is concerned only with the occurrence of symbols in strings and not their order.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Much could be done to further reduce memory consumption.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
The natural baseline (baseline) outperforms the pure IN system only for EMEA/EP fren.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Nodes are annotated by the name of the composition operation used at that step in the derivation.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
At the phrasal level, we remove all function tags and traces.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Unsupervised Learning of Contextual Role Knowledge for Coreference Resolution
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
There is no global pruning.
0
Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the cases (GR is .73, or 96%).
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
(Charniak et al., 1996).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that “report” things also “add” and “state” things; crimes that are “perpetrated” are often later “condemned”.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
3 61.7 38.
The AdaBoost algorithm was developed for supervised learning.
0
In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Collisions between two keys in the table can be identified at model building time.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Two sets of examples from Gan are given in (1) and (2) (= Gan's Appendix B).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
These tag distributions are used to initialize the label distributions over the English vertices in the graph.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
[Table: parsing scores at sentence length ≤ 70 and over all lengths, comparing Baseline (Self tag), Bikel (v1.2), Baseline (Pretag), and Gold POS settings.]
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT.