source_text: string (lengths 27–368)
label: int64 (values 0–1)
target_text: string (lengths 1–5.38k)
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Reading the following record’s offset indicates where the block ends.
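The next-record-offset trick can be illustrated with a small sketch; the `offsets` array and `block_bounds` helper below are hypothetical names for illustration, not part of the system described.

```python
# Illustrative sketch (assumed layout): each record stores the byte offset at
# which its block begins, so the following record's offset marks where the
# current block ends. `offsets` and `block_bounds` are hypothetical names.
def block_bounds(offsets, i, region_end):
    """Return (start, end) byte positions of the block owned by record i."""
    start = offsets[i]
    end = offsets[i + 1] if i + 1 < len(offsets) else region_end
    return start, end

print(block_bounds([0, 40, 72, 96], 1, 128))  # -> (40, 72)
```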
This topic has been getting more attention, driven by the needs of various NLP applications.
0
It is not easy to give a clear definition of “paraphrase”.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed that they were recognizable in polynomial time and generated only semilinear languages.
0
The equivalence of CC's with this restriction to TAG's and HG's is, however, still an open problem.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
A totally nonstochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.
Because many systems performed similarly, they are not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
We divide each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block whether one system has a higher BLEU score than the other, and then use the sign test.
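As a rough illustration of this block-wise comparison, the sketch below counts per-block wins and applies a two-sided sign test; the win counts are made-up example numbers, not results from the paper.

```python
# Illustrative sketch of the sign test over blocks of 20 sentences:
# count the blocks each system wins on BLEU, then ask whether that split
# could plausibly arise by chance under a fair coin.
from math import comb

def sign_test_p(wins_a, wins_b):
    """Two-sided sign test p-value, ignoring tied blocks."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# Made-up example: system A wins 64 of 100 in-domain blocks, B wins 36.
print(sign_test_p(64, 36))  # small p-value suggests a real difference
```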
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
For comparison to information-retrieval inspired baselines, e.g. (Lü et al., 2007), we select sentences from OUT using language model perplexities from IN.
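A minimal sketch of this selection step, assuming a generic perplexity scorer trained on IN; the function names and threshold below are illustrative, not the authors' implementation.

```python
# Illustrative sketch: score OUT-domain sentences with an IN-domain language
# model and keep the lowest-perplexity fraction as pseudo-in-domain data.
# `in_domain_ppl` stands in for a real language model scorer.
def select_by_perplexity(out_sentences, in_domain_ppl, keep_fraction=0.2):
    ranked = sorted(out_sentences, key=in_domain_ppl)  # lowest perplexity first
    return ranked[: int(len(ranked) * keep_fraction)]

# Toy usage with a placeholder scorer (a real setup would query an actual LM):
toy_ppl = lambda sent: len(set(sent.split()))
corpus = ["the patient received treatment", "stocks fell sharply on monday"]
print(select_by_perplexity(corpus, toy_ppl, keep_fraction=0.5))
```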
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
However, MADA is language-specific and relies on manually constructed dictionaries.
This corpus has several advantages: it is annotated at different levels.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
This assumption, however, is not inherent to type-based tagging models.
0
Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
While the linear precedence of segmental morphemes within a token is subject to constraints, the dominance relations among their mother and sister constituents are rather free.
Two general approaches are presented and two combination techniques are described for each approach.
0
Call the crossing constituents A and B.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
2.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
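For reference, a standard statement of the Good-Turing relationships in question (a textbook formulation, not a quotation from the paper) is:

```latex
% Good-Turing: the aggregate probability of unseen items and the adjusted
% count r* both depend on the expectations E(N_r), i.e. the expected number
% of types observed exactly r times among N tokens.
p_{\text{unseen}} \approx \frac{E(N_1)}{N},
\qquad
r^{*} = (r + 1)\,\frac{E(N_{r+1})}{E(N_r)}.
```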
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
ments contained 322 anaphoric links.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Step 3.
This assumption, however, is not inherent to type-based tagging models.
0
i=1 (f,v)∈Wi
The use of global features has yielded excellent performance on the MUC-6 and MUC-7 test data.
0
We have used the Java-based opennlp maximum entropy package.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We carried out translation experiments in two different settings.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The same numbers were used for each data structure.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The final strong hypothesis, denoted f(x), is then the sign of a weighted sum of the weak hypotheses, f(x) = sign(Σ_{t=1}^{T} α_t h_t(x)), where the weights α_t are determined during the run of the algorithm, as we describe below.
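A toy sketch of evaluating such a strong hypothesis; the weak hypotheses and weights below are invented placeholders, not ones learned by the boosting run described here.

```python
# Illustrative sketch: the strong hypothesis is the sign of a weighted sum of
# weak hypotheses, f(x) = sign(sum_t alpha_t * h_t(x)).
def strong_hypothesis(x, weak_hyps, alphas):
    total = sum(alpha * h(x) for h, alpha in zip(weak_hyps, alphas))
    return 1 if total >= 0 else -1

# Toy weak hypotheses over a string input (placeholders for learned rules):
weak_hyps = [lambda x: 1 if "Mr." in x else -1,
             lambda x: 1 if x.istitle() else -1]
alphas = [0.7, 0.3]  # toy weights standing in for the learned alpha_t
print(strong_hypothesis("Mr. Smith", weak_hyps, alphas))  # -> 1
```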
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Taking only the highest frequency rules is much "safer", as they tend to be very accurate.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Table 3 Classes of words found by ST for the test corpus.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate, due to sentence alignment errors or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
5.1 Parsing Models.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
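A one-line sketch of this per-judge normalization, which recenters each judge on the scale midpoint of 3 (variable names are illustrative):

```python
# Illustrative sketch: normalized judgement = raw + (3 - judge's average raw).
def normalize_judgement(raw, judge_average):
    return raw + (3 - judge_average)

print(normalize_judgement(4, 3.6))  # a lenient judge's 4 becomes 3.4
```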
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This limits the number of NE category pairs to 2,000 and the number of NE pair instances to 0.63 million.
They have made use of local and global features to deal with instances of the same token in a document.
0
No. of Articles.
They have made use of local and global features to deal with instances of the same token in a document.
0
Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We of course also fail to identify, by the methods just described, given names used without their associated family name.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Juri Ganitkevitch answered questions about Joshua.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We will return to these issues in the discussion section.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The cost estimate, cost(ri4wen2 zhang1yu2 zen3me0 shuo1), is computed in the obvious way by summing the negative log probabilities of the component words.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
1 1 0.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Let us consider an example of ambiguity caused by devocalization.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
The corpus was wordaligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Many morphological decisions are based on long-distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG also takes local context into account.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In the third and final scheme, denoted Path, we keep the extra information. Note that this is a baseline for the parsing experiment only (Experiment 2).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
We will describe the evaluation of such clusters in the next subsection.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
As lower frequency examples include noise, we set a threshold that an NE category pair should appear at least 5 times to be considered and an NE instance pair should appear at least twice to be considered.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
If two systems’ scores are close, this may simply be a random effect in the test data.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraints.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Kollege.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
This feature has a linguistic justification.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
For all grammars, we use fine-grained PoS tags indicating various morphological features annotated therein.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
(2009), who also incorporate a sparsity constraint, but do so by altering the model objective using posterior regularization.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Both of these analyses are shown in Figure 4; fortunately, the correct analysis is also the one with the lowest cost, so it is this analysis that is chosen.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008).
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
A different knowledge source, called CFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The developers aimed to reduce memory consumption at the expense of time.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Figure 1: A Chinese sentence in (a), 'How do you say octopus in Japanese?', illustrating the lack of word boundaries; (b) the plausible segmentation ri4wen2 zhang1yu2 zen3me0 shuo1 'Japanese' 'octopus' 'how' 'say'; (c) the implausible segmentation ri4 wen2 zhang1 yu2 zen3me0 shuo1 'Japan' 'essay' 'fish' 'how' 'say'.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently ‘in focus’ (which also appears underlined in the full text).
This corpus has several advantages: it is annotated at different levels.
0
When finished, the whole material is written into an XML-structured annotation file.
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge was particularly successful for pronouns.
0
In this section, we describe how contextual role knowledge is represented and learned.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis, violating the restriction.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For E(ni1s), then, we substitute a smooth S against the number of class elements.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Past work, however, has typically associated these features with token occurrences, typically in an HMM.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
64 76.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
We have not yet tried this.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The annotator can then “click away” those words that are here not used as connectives (such as the conjunction und (‘and’) used in lists, or many adverbials that are ambiguous between connective and discourse particle).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
For each label (Person, Organization, and Location), take the n contextual rules with the highest value of Count'(x) whose unsmoothed strength is above some threshold pmin.
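A hedged sketch of this rule-selection step; the rule records and field names below are illustrative stand-ins for the paper's Count'(x) and pmin.

```python
# Illustrative sketch: per label, keep the n rules with the highest count
# whose unsmoothed strength clears the threshold pmin.
def select_rules(rules, n=5, pmin=0.95):
    selected = {}
    for label in ("person", "organization", "location"):
        eligible = [r for r in rules
                    if r["label"] == label and r["strength"] >= pmin]
        eligible.sort(key=lambda r: r["count"], reverse=True)
        selected[label] = eligible[:n]
    return selected

rules = [{"label": "person", "strength": 0.98, "count": 120, "feature": "Mr."},
         {"label": "person", "strength": 0.80, "count": 300, "feature": "said"}]
print(select_rules(rules, n=1))  # only the high-strength "Mr." rule survives
```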
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Having defined LCFRS's, in Section 4.2 we established the semilinearity (and hence constant growth property) of the languages generated.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
In our model, however, all lattice paths are taken to be a-priori equally likely.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
There exist a few robust broad-coverage parsers that produce non-projective dependency structures, notably Tapanainen and Järvinen (1997) and Wang and Harper (2004) for English, Foth et al. (2004) for German, and Holan (2004) for Czech.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
taTweel (-) is an elongation character used in Arabic script to justify text.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
They also describe an application of co-training to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Our experimental setup therefore is designed to serve two goals.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis.
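A minimal sketch of this kind of recombination, keeping only the best-scoring hypothesis per state; the state tuples and scores are illustrative, and a real decoder would also keep back-pointers for k-best extraction.

```python
# Illustrative sketch: hypotheses with identical state are recombined; only
# the best score per state is kept as the representative packed hypothesis.
def recombine(hypotheses):
    best = {}
    for state, score in hypotheses:  # state: e.g. LM context plus coverage
        if state not in best or score > best[state]:
            best[state] = score
    return best

hyps = [(("the", "cat"), -4.2), (("the", "cat"), -5.1), (("a", "cat"), -4.8)]
print(recombine(hyps))  # {('the', 'cat'): -4.2, ('a', 'cat'): -4.8}
```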
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Because b is a function, no additional hypothesis splitting happens.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Semantic (a): filters candidate if its semantic tags don't intersect with those of the anaphor.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be combined and used to influence the later derivation.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
We considered using the MUC6 and MUC7 data sets, but their training sets were far too small to learn reliable co-occurrence statistics for a large set of contextual role relationships.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
3 The Coreference Resolution Model.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Table 5: Effect of the beam threshold on the number of search errors (147 sentences).
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Under this scheme, n human judges are asked independently to segment a text.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Dropping the conditioning on θ for brevity, and letting c̄λ(s, t) = cλ(s, t) + γu(s|t), and c̄λ(t) = … Note that the probabilities in (7) need only be evaluated over the support of p̃(s, t), which is quite small when this distribution is derived from a dev set.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large numbers for |h(x)| indicate high confidence in the prediction, and numbers close to zero indicate low confidence.
This paper talks about Unsupervised Models for Named Entity Classification.
0
In our experiments we set the parameter values randomly, and then ran EM to convergence.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For definite NPs, the results are a mixed bag: some knowledge sources increased recall a little, but at the expense of some precision.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Consider first the examples in (2).
This paper conducted research in the area of automatic paraphrase discovery.
0
We will return to these issues in the discussion section.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions.