source_text: string (lengths 27–368)
label: int64 (0 or 1)
target_text: string (lengths 1–5.38k)
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
({1, …, m} \ {l1}, l) → ({1, …, m} \ {l, l1, l2}, l′)
This paper talks about Unsupervised Models for Named Entity Classification.
0
We present two algorithms.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
When the connective is an adverbial, there is much less clarity as to the range of the spans.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We will also directly compare with a baseline similar to the Matsoukas et al. approach in order to measure the benefit from weighting phrase pairs (or n-grams) rather than full sentences.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Figure 2: An abstract example illustrating the segmentation algorithm.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
4.3 Translation Experiments.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
For each experiment we gave a nonparametric and a parametric technique for combining parsers.
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., an NER may mistake Even News Broadcasting Corp. as an organization name.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
We also performed experiments to evaluate the impact of each type of contextual role knowledge separately.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The most popular approach to dealing with segmentation ambiguities is the maximum matching method, possibly augmented with further heuristics.
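Since maximum matching recurs throughout these segmentation rows, a minimal sketch may help. This is the textbook greedy left-to-right variant with a made-up toy dictionary, not the exact heuristics any of the cited systems use.

```python
def max_match(text, dictionary, max_word_len=4):
    """Greedy left-to-right maximum matching segmentation.

    At each position, take the longest dictionary word that matches;
    fall back to a single character when nothing in the dictionary does.
    """
    words = []
    i = 0
    while i < len(text):
        for j in range(min(len(text), i + max_word_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# Hypothetical toy dictionary:
print(max_match("abcd", {"ab", "abc", "d"}))  # ['abc', 'd']
```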
This paper conducted research in the area of automatic paraphrase discovery.
0
For the experiments, we used four newswire corpora, the Los Angeles Times/Washington Post, The New York Times, Reuters and the Wall Street Journal, all published in 1995.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(2006).
The AdaBoost algorithm was developed for supervised learning.
0
(5) and ht into Equ.
The texts were annotated with the RSTtool.
0
For the ‘core’ portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective. When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright — but see Sections 3.2 and 3.3 below.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
This design leads to a significant reduction in the computational complexity of training and inference.
Their results show that their high-performance NER uses less training data than other systems.
0
The baseline system in Table 3 refers to the maximum entropy system that uses only local features.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
All the NE pair instances which co-occur separated by at most 4 chunks are collected along with information about their NE types and the phrase between the NEs (the ‘context’).
These clusters are computed using an SVD variant without relying on transitional structure.
0
Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We train and test on the CoNLL-X training set.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
It is probably the first analysis of Arabic parsing of this kind.
0
All the texts were annotated by two people.
0
The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Figure 5 shows how this model is implemented as part of the dictionary WFST.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.
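The multi-reference error measure described here is easy to make concrete: compute the word-level Levenshtein distance to each reference and keep the minimum. The function names below are illustrative, not from the paper.

```python
def levenshtein(hyp, ref):
    """Word-level edit distance via dynamic programming."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def min_reference_distance(hyp, refs):
    """Minimum Levenshtein distance over all reference translations."""
    return min(levenshtein(hyp.split(), r.split()) for r in refs)
```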
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
In contrast, NNP (proper nouns) form a large portion of vocabulary.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
IRSTLM and BerkeleyLM use this state function (and a limit of N−1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
-1 means that an NP should be ruled out as a possible antecedent, and 0 means that the knowledge source remains neutral (i.e., it has no reason to believe that they cannot be coreferent).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The structure uses linear probing hash tables and is designed for speed.
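As a rough illustration of the linear probing idea (open addressing: on a collision, step forward until a free or matching slot), here is a toy sketch. KenLM's actual structure is C++ and far more memory-conscious, so treat this only as a sketch of the concept.

```python
class LinearProbingTable:
    """Toy open-addressing hash table with linear probing.

    On collision, step to the next bucket until an empty slot or the
    key is found. Real implementations keep the load factor well
    below 1 so probe chains stay short.
    """
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _probe(self, key):
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity  # linear step on collision
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot else None
```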
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In total, across all domains, we kept 13,976 phrases with keywords.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The first value reports memory use immediately after loading, while the second reports the increase during scoring. BerkeleyLM is written in Java, which requires memory be specified in advance.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Better grammars are shown here to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
This logic applies recursively: if wnf+1 similarly does not extend and has zero log backoff, it too should be omitted, terminating with a possibly empty context.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This Good-Turing estimate of p(unseen(f_i) | f_i) can then be used in the normal way to define the probability of finding a novel instance of a construction in f_i in a text: p(unseen(f_i)) = p(unseen(f_i) | f_i) · p(f_i). Here p(f_i) is just the probability of any construction in f_i as estimated from the frequency of such constructions in the corpus.
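In the standard Good-Turing construction, the total mass reserved for unseen events of a class is the number of singleton types divided by the total count. A minimal sketch under that reading, with names of my own choosing:

```python
from collections import Counter

def good_turing_unseen_mass(observations):
    """Good-Turing estimate of the total probability of unseen events:
    p0 = N1 / N, where N1 is the number of types seen exactly once
    and N is the total number of observations."""
    counts = Counter(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    n = len(observations)
    return n1 / n if n else 0.0

# e.g. constructions observed in a corpus sample:
print(good_turing_unseen_mass(["a", "a", "b", "c", "c", "d"]))  # 2/6
```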
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
where the husband's family name is optionally prepended to the woman's full name; thus xu3lin2-yan2hai3 would represent the name that Ms. Lin Yanhai would take if she married someone named Xu.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores, and add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraints.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
A similar explanation applies to the link to the “stake” set.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
This model is equivalent to the standard HMM except that it enforces the one-word-per-tag constraint.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The government has to make a decision, and do it quickly.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We can only compare with Graça et al.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Similarly, there is no compelling evidence that either of the syllables of bin1lang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely bin1lang2 is a disyllabic morpheme.
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of how well manual and automatic evaluation metrics correlate.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
This paper talks about Unsupervised Models for Named Entity Classification.
0
An edge indicates that the two features must have the same label.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Backoff-smoothed models estimate this probability based on the observed entry with longest matching history w_f^n, returning p(w_n | w_1^{n−1}) = p(w_n | w_f^{n−1}) · ∏_{i=1}^{f−1} b(w_i^{n−1}), where the probability p(w_n | w_f^{n−1}) and backoff penalties b(w_i^{n−1}) are given by an already-estimated model.
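A direct, unoptimized rendering of this backoff recursion, assuming a hypothetical `model` dict mapping n-gram tuples to (log probability, log backoff) pairs as in an ARPA file:

```python
def score(model, context, word):
    """Backoff-smoothed log probability of `word` given `context`.

    `model` maps n-gram tuples to (log_prob, log_backoff) pairs.
    Falls back to shorter histories, accumulating backoff penalties
    b(w_i^{n-1}) for each context that failed to match, per the
    formula above.
    """
    backoff = 0.0
    for start in range(len(context) + 1):
        entry = model.get(tuple(context[start:]) + (word,))
        if entry is not None:
            return entry[0] + backoff  # longest matching history found
        ctx = model.get(tuple(context[start:]))
        if ctx is not None:
            backoff += ctx[1]  # accumulate the backoff penalty
    return float("-inf")  # word is not even a known unigram
```

With a toy model such as {('the',): (-1.0, -0.5), ('the', 'cat'): (-2.0, 0.0), ('cat',): (-3.0, 0.0)}, score(model, ['the'], 'cat') returns -2.0 directly, while an unmatched bigram pays b('the') = -0.5 before falling back to the unigram.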
Here we present two algorithms.
0
Given parameter estimates, the label for a test example x is defined as the most probable label under the model. We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
In other words, the set of hidden states F was chosen to be the fine set of treebank tags.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
(2009), who also incorporate a sparsity constraint, but do so by altering the model objective using posterior regularization.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Hereafter, each pair of NE categories will be called a domain; e.g. the “Company – Company” domain, which we will call CC-domain (Step 2).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
This is akin to PoS tag sequences induced by different parses in the setup familiar from English and explored in e.g.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
attaching to terms denoting human beings.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De and their foreign language translations Df. (Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common.) Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De↔f.
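The intersection step mentioned at the end can be sketched as follows; the data structures (dicts mapping index pairs to posteriors) are my assumption, not the paper's representation.

```python
def intersect_alignments(src2tgt, tgt2src, threshold=0.9):
    """Keep only links (i, j) that both directional aligners propose
    with posterior probability above `threshold`.

    src2tgt, tgt2src: dicts mapping (i, j) index pairs to posteriors.
    Intersection trades recall for precision, as the text describes.
    """
    return {
        (i, j)
        for (i, j), p in src2tgt.items()
        if p > threshold and tgt2src.get((j, i), 0.0) > threshold
    }
```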
There is no global pruning.
0
The search starts in the hypothesis (I, {∅}, 0).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Similar results have been observed across multiple languages.
These clusters are computed using an SVD variant without relying on transitional structure.
0
These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
The addition of vertical markovization enables non-pruned models to outperform all previously reported results. (Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks.)
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
An important aspect of the Dempster-Shafer model is that it operates on sets of hypotheses.
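Operating on sets of hypotheses means belief can be assigned to a set of candidate antecedents rather than to single ones. Dempster's rule of combination, which merges two such mass functions, can be sketched directly; frozensets stand in for hypothesis sets here, and this is a generic illustration rather than BABAR's implementation.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over two mass functions.

    m1, m2: dicts mapping frozensets of hypotheses to masses summing
    to 1. Masses of intersecting focal sets multiply; conflict (empty
    intersections) is discarded and the rest renormalized. Assumes
    the two sources are not in total conflict.
    """
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Two knowledge sources narrowing down candidate antecedents:
m1 = {frozenset({"NP1", "NP2"}): 0.7, frozenset({"NP1", "NP2", "NP3"}): 0.3}
m2 = {frozenset({"NP2"}): 0.6, frozenset({"NP2", "NP3"}): 0.4}
print(dempster_combine(m1, m2))
```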
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Independence of paths at this level reflects context freeness of rewriting and suggests why they can be recognized efficiently.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Chinese speakers may object to this form, since the suffix men0 (PL) is usually restricted to.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Hereafter, each pair of NE categories will be called a domain; e.g. the “Company – Company” domain, which we will call CC- domain (Step 2).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differentiable function which bounds the number of examples on which the two classifiers disagree.
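The actual CoBoost objective is more involved, but the two-view iterative idea can be caricatured as below. This is a generic co-training skeleton under my own interface assumptions (the `train` and `predict` callables), not the authors' algorithm.

```python
def co_train(train, predict, labeled, unlabeled, rounds=5, conf=0.95):
    """Schematic co-training loop over two feature views.

    labeled:   list of ((view1, view2), label) pairs.
    unlabeled: list of (view1, view2) pairs.
    train:     callable(features, labels) -> model
    predict:   callable(model, feature) -> (label, confidence)
    Each round, each view's classifier pseudo-labels examples it is
    confident about, growing the training set seen by the other view.
    """
    pool = list(unlabeled)
    for _ in range(rounds):
        for view in (0, 1):
            model = train([x[view] for x, _ in labeled],
                          [y for _, y in labeled])
            confident, rest = [], []
            for x in pool:
                label, p = predict(model, x[view])
                (confident if p >= conf else rest).append((x, label))
            labeled.extend(confident)  # pseudo-labels feed the other view
            pool = [x for x, _ in rest]
    return labeled
```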
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
First, the training data for the parser is projectivized by applying a minimal number of lifting operations (Kahane et al., 1998) and encoding information about these lifts in arc labels.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Instead, we resort to an iterative update based method.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise to non-projective structures. (The dependency graph has been modified to make the final period a dependent of the main verb instead of being a dependent of a special root node for the sentence.)
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Uniform Tag Prior (1TW): Our initial lexicon component will be uniform over possible tag assignments as well as word types.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
We apply a beam search concept as in speech recognition.
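To make the coverage-set hypotheses concrete: a toy beam search where a state is (covered source positions, last covered position, accumulated score), pruned to a fixed beam per cardinality. The scoring callable is a placeholder of my own, not the paper's translation model.

```python
import heapq

def beam_search(m, extend_score, beam_size=10):
    """Toy DP beam search over coverage-set hypotheses.

    A hypothesis is (covered, last, score). The search starts from the
    empty coverage set, echoing the paper's initial hypothesis, and
    ends when all m source positions are covered.
    """
    beams = {0: [(frozenset(), 0, 0.0)]}  # keyed by number of covered words
    for n in range(m):
        extensions = []
        for covered, last, score in beams.get(n, []):
            for j in range(1, m + 1):  # try covering position j next
                if j in covered:
                    continue
                extensions.append(
                    (covered | {j}, j, score + extend_score(last, j)))
        # pruning: keep only the best `beam_size` hypotheses
        beams[n + 1] = heapq.nlargest(beam_size, extensions,
                                      key=lambda h: h[2])
    return max(beams[m], key=lambda h: h[2])

# A distortion-like score that prefers monotone order:
print(beam_search(3, lambda last, j: -abs(j - last - 1)))
```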
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
(2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.
Combining multiple highly-accurate independent parsers yields promising results.
0
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Most computational models for coreference resolution rely on properties of the anaphor and candidate antecedent, such as lexical matching, grammatical and syntactic features, semantic agreement, and positional information.
Replacing this with a ranked evaluation seems to be more suitable.
0
Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
gao1xing4 'happy' => gao1gao1xing4xing4
Replacing this with a ranked evaluation seems to be more suitable.
0
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Papers that use this method or minor variants thereof include Liang (1986), Li et al.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.
These clusters are computed using an SVD variant without relying on transitional structure.
0
On several languages, we report performance exceeding that of more complex state-of-the-art systems.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff.