source_text: string, lengths 27 to 368
label: int64, values 0 to 1
target_text: string, lengths 1 to 5.38k
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Extract NE instance pairs with contexts First, we extract NE pair instances with their context from the corpus.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
These sequence models-based approaches commonly treat token-level tag assignment as the primary latent variable.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
…interested in correlations between prosody and discourse structure.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Given that weights on all outgoing arcs sum up to one, weights induce a probability distribution on the lattice paths.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Proper-Name Identification.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Statistics for all data sets are shown in Table 2.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they are recognizable in polynomial time and generate only semilinear languages.
0
Each production can push or pop symbols on the stack, as can be seen in the following productions that generate trees of the form shown in Figure 4b.
The corpus was annotated with different linguistic information.
0
Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification.
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Since our destructor is an efficient call to munmap, bypassing the destructor favors only other packages.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The 2nd block contains the IR system, which was tuned by selecting text in multiples of the size of the EMEA training corpus, according to dev set performance.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Two measures that can be used to compare judgments are: 1.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
We are given a source string f_1^J = f_1 … f_j … f_J of length J, which is to be translated into a target string e_1^I = e_1 … e_i … e_I of length I. Among all possible target strings, we will choose the string with the highest probability:

    ê_1^I = argmax_{e_1^I} Pr(e_1^I | f_1^J) = argmax_{e_1^I} { Pr(e_1^I) · Pr(f_1^J | e_1^I) }    (1)

The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language.
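The noisy-channel decision rule in the row above picks, among candidate target strings, the one maximizing the product of a language-model probability Pr(e) and a translation-model probability Pr(f | e). A minimal sketch, with hypothetical `lm_prob` and `tm_prob` stand-ins for real models; the candidate generation itself (the actual search problem) is not shown:

```python
def noisy_channel_best(candidates, lm_prob, tm_prob, source):
    # ê = argmax over e of Pr(e) * Pr(f | e), as in Eq. (1).
    # lm_prob(e) and tm_prob(f, e) are assumed scoring functions.
    return max(candidates, key=lambda e: lm_prob(e) * tm_prob(source, e))


# Toy models: a fluent-but-unfaithful candidate can lose to a
# less fluent one with a much higher translation probability.
lm = {"good": 0.6, "bad": 0.4}
tm = {("src", "good"): 0.5, ("src", "bad"): 0.9}
best = noisy_channel_best(["good", "bad"],
                          lambda e: lm[e],
                          lambda f, e: tm[(f, e)],
                          "src")
```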
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent.
The texts were annotated with the RSTtool.
0
This paper, however, provides a comprehensive overview of the data collection effort and its current state.
They found replacing it with a ranked evaluation to be more suitable.
0
If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.
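The paired bootstrap test described in the row above can be sketched as follows. This is a simplification in that it resamples sentence-level scores and compares their sums, whereas corpus BLEU is not a sum of sentence scores; the function name and 95% threshold are illustrative:

```python
import random

def bootstrap_better_fraction(scores_a, scores_b, n_samples=1000, seed=0):
    """Paired bootstrap: draw test sets with replacement and return the
    fraction of samples on which system A's total score beats system B's."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resampled test set
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_samples
```

If the returned fraction is at least 0.95, one would conclude (as the text does) that A's higher score is statistically significant at that level.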
These clusters are computed using an SVD variant without relying on transitional structure.
0
See Section 5.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
It is annotated at several levels: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
Reiche’s colleagues will make sure that the concept is waterproof.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Since pronouns carry little semantics of their own, resolving them depends almost entirely on context.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Finally we show the combining techniques degrade very little when a poor parser is added to the set.
This paper conducted research in the area of automatic paraphrase discovery.
0
For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts.
This corpus has several advantages: it is annotated at different levels.
0
Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.
Here both parametric and non-parametric models are explored.
0
This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.
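The conditional-independence assumption described in the row above reduces the joint likelihood of the attributes to a product of per-attribute terms. A minimal sketch with made-up probabilities:

```python
from math import prod

def naive_bayes_score(prior, cond_probs):
    """Unnormalized class score under the naive Bayes assumption:
    P(class | a1..an) is proportional to P(class) * product of P(ai | class).
    Valid only when attribute values are conditionally independent
    given the target value."""
    return prior * prod(cond_probs)
```

In practice one computes this score for every class and picks the largest; normalization is unnecessary for classification since the denominator is shared.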
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.
Their results show that their high performance NER use less training data than other systems.
0
Mikheev et al.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they are recognizable in polynomial time and generate only semilinear languages.
0
TAG's can be used to give the structural descriptions discussed by Gazdar (1985) for the unbounded nested dependencies in Norwedish, for cross serial dependencies in Dutch subordinate clauses, and for the nestings of paired English complementizers.
The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.
0
This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
By design, they readily capture regularities at the token-level.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
However, it is robust, efficient, and easy to implement.4 To perform the maximization in (7), we used the popular L-BFGS algorithm (Liu and Nocedal, 1989), which requires gradient information.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Another attempt at using global information can be found in (Borthwick, 1999).
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
These two restrictions impose the constraint that the result of composing any two structures should be a structure whose "size" is the sum of its constituents plus some constant. For example, the concatenation operation discussed in the case of CFGs (in Section 4.1) adds a constant equal to the sum of the lengths of the strings v_1, …, v_{n+1}. Since we are considering formalisms with arbitrary structures, it is difficult to precisely specify all of the restrictions on the composition operations that we believe would appropriately generalize the concatenation operation for the particular structures used by the formalism.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Since the inclusion of out-ofdomain test data was a very late decision, the participants were not informed of this.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The type-level tag assignments T generate features associated with word types W . The tag assignments constrain the HMM emission parameters θ.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Number: filters candidate if number doesn’t agree.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
In our experiment, we set the threshold of the TF/ITF score empirically using a small development corpus; a finer adjustment of the threshold could reduce the number of such keywords.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For each pair of judges, consider one judge as the standard.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The confidence intervals are computed by bootstrap resampling for BLEU, and by standard significance testing for the manual scores, as described earlier in the paper.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
So, we set a threshold that at least two examples are required to build a link.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
3.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Formally, we define dependency graphs as follows: 3.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In a few cases, the criteria for correctness are made more explicit.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Each production can push or pop symbols on the stack, as can be seen in the following productions that generate trees of the form shown in Figure 4b.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
While the paper mentioned a sorted variant, code was never released.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Our analysis and comparison focuses primarily on one-to-one accuracy, since it is a stricter metric than many-to-one accuracy, but we also report many-to-one for completeness.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
With regard to purely morphological phenomena, certain processes are not handled elegantly within the current framework. Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The edge from the root to the subtree for the derivation of 7i is labeled by the address ni.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). Given how manual judgements were collected, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
So, there is a limitation that IE can only be performed for a predefined task, like “corporate mergers” or “management succession”.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).
The evaluation compares the performance of the system with that of several human judges, and with inter-human agreement, since there is no single correct way to segment a text.
0
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Although these existential NPs do not need a prior referent, they may occur multiple times in a document.
There are clustering approaches that assign a single POS tag to each word type.
0
(2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each z, and yk is used exactly once to define the strings zi, ,z1,3.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Replacing this with a ranked evaluation seems to be more suitable.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
• Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
However, their inverted variant implements a reverse trie using less CPU and the same amount of memory.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
This is a somewhat less direct objective than that used by Matsoukas et al., who make an iterative approximation to expected TER.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Word type                      N      %
Dictionary entries             2,543  97.47
Morphologically derived words  3      0.11
Foreign transliterations       9      0.34
Personal names                 54     2.07
This paper talks about Unsupervised Models for Named Entity Classification.
0
(3)) to be defined over unlabeled as well as labeled instances.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al.
The texts were annotated with the RSTtool.
0
Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the an- notator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
We will also directly compare with a baseline similar to the Matsoukas et al approach in order to measure the benefit from weighting phrase pairs (or ngrams) rather than full sentences.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differentiable function which bounds the number of examples on which the two classifiers disagree.
BABAR achieved successful results in both the terrorism and natural disaster domains, and the contextual-role knowledge proved especially helpful for resolving pronouns.
0
The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that “report” things also “add” and “state” things; crimes that are “perpetrated” are often later “condemned”.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
(2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.
There is no global pruning.
0
The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.
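The multi-reference WER criterion in the row above can be sketched with a word-level Levenshtein distance taken over several references. The normalization by reference length is one common choice and is an assumption here; the paper's exact normalization may differ:

```python
def levenshtein(hyp, ref):
    # Word-level edit distance (substitutions, insertions, deletions),
    # computed with a rolling one-row dynamic program.
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        cur = [i]
        for j, r in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (h != r)))    # substitution
        prev = cur
    return prev[-1]

def mwer(hyp, refs):
    """Multi-reference WER: the smallest normalized edit distance
    between the hypothesis and any of the reference translations."""
    return min(levenshtein(hyp, r) / len(r) for r in refs)
```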
In this paper, Das and Petrov approach the task of inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but do have translated text in a resource-rich language.
0
We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.
The AdaBoost algorithm was developed for supervised learning.
0
Again, this deserves further investigation.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(tb, ta)} and they are contained in t(−i).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated.
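The linear probing idea behind the PROBING structure mentioned two rows above can be illustrated with a generic open-addressing table. This is only a sketch of the probing scheme, not the actual implementation, which is a fixed-size C++ array keyed by 64-bit hashes:

```python
class LinearProbingTable:
    """Minimal open-addressing hash table with linear probing.
    Illustrative only: no resizing, so it must be sized with spare
    capacity, and lookups degrade as the table fills."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _find(self, key):
        # Start at the hashed slot and walk forward until we hit
        # the key or an empty slot.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._find(key)] = (key, value)

    def get(self, key, default=None):
        entry = self.slots[self._find(key)]
        return entry[1] if entry is not None else default
```

Compared with chaining, probing keeps all entries in one flat array, which is why this layout is designed for speed: lookups touch contiguous memory with no pointer chasing.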
Combining multiple highly-accurate independent parsers yields promising results.
0
Once again we present both a non-parametric and a parametric technique for this task.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.
The AdaBoost algorithm was developed for supervised learning.
0
This intuition is born out by the experimental results.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The following algorithm was then used to induce new rules: Let Count' (x) be the number of times feature x is seen with some known label in the training data.
They have made use of local and global features to deal with the instances of the same token in a document.
0
The features we used can be divided into 2 classes: local and global.
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Both MENE and IdentiFinder used more training data than we did (we used only the official MUC-6 and MUC-7 training data).
The AdaBoost algorithm was developed for supervised learning.
0
|X2| = N and N is a "medium" sized number so that it is feasible to collect O(N) unlabeled examples.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Memory mapping also allows the same model to be shared across processes on the same machine.
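The sharing claim in the row above follows from how file-backed memory mapping works: every process that maps the same read-only file is served by the same physical pages in the OS page cache. A minimal sketch using Python's stdlib `mmap`; the file name and contents are made up:

```python
import mmap
import os
import tempfile

# Write a toy "model" file standing in for a binary n-gram model.
path = os.path.join(tempfile.mkdtemp(), "model.bin")
with open(path, "wb") as f:
    f.write(b"ngram-model-bytes")

# Map it read-only. Any other process mapping the same file shares
# the same physical memory; nothing is copied per process.
with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    payload = bytes(mapped[:5])  # read the first few mapped bytes
    mapped.close()
```

Unmapping (here via `close`, in C via `munmap`) is also cheap, which is what the earlier row about bypassing the destructor alludes to.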
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis, violating the restriction.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We settled on contrastive evaluations of 5 system outputs for a single test sentence.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Reference resolution involves finding words that co-refer to the same entity.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
A spelling rule might be a simple look-up for the string (e.g., a rule that Honduras is a location) or a rule that looks at words within a string (e.g., a rule that any string containing Mr. is a person).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
A promising direction for future work is to explicitly model a distribution over tags for each word type.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
On each step CoBoost searches for a feature and a weight so as to minimize either of the two objective functions.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
the number of permutations carried out for the word reordering is given.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation.