Columns: source_text (string, length 27 to 368), label (int64, 0 or 1), target_text (string, length 1 to 5.38k)
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The logistic function, whose outputs are in [0, 1], forces p_p(s, t) ≤ p_o(s, t).
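As a sketch of the capping behaviour this row describes (illustrative Python with made-up feature names and weights, not the paper's actual model): since the logistic output lies in [0, 1], multiplying it into the out-of-domain estimate p_o(s, t) can never produce a value above p_o(s, t).

    import math

    def logistic(z):
        # Standard logistic function; output always lies in (0, 1).
        return 1.0 / (1.0 + math.exp(-z))

    def weighted_phrase_prob(features, weights, p_o):
        # Hypothetical instance weight: a logistic score over phrase-pair
        # features, multiplied into the out-of-domain estimate p_o(s, t).
        # The product p_p(s, t) <= p_o(s, t) holds by construction.
        z = sum(w * f for w, f in zip(weights, features))
        return logistic(z) * p_o

    # Toy call: even a strongly positive score cannot exceed p_o = 0.3.
    print(weighted_phrase_prob([1.0, 0.5], [2.0, 1.0], p_o=0.3))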
This topic has been getting more attention, driven by the needs of various NLP applications.
0
As you can see in the figure, the accuracy for the domain is quite high except for the “agree” set, which contains various expressions representing different relationships for an IE application.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Our knowledge sources return some sort of probability estimate, although in some cases this estimate is not especially well-principled (e.g., the Recency KS).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Given a key k, it estimates the position. If the estimate is exact (A[pivot] = k), then the algorithm terminates successfully.
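The description in this row matches interpolation search over a sorted array; the following is a generic sketch of that technique (not the toolkit's actual code), where the pivot is estimated from the key's value rather than taken as the midpoint.

    def interpolation_search(A, k):
        # Search sorted array A for key k; return its index or None.
        lo, hi = 0, len(A) - 1
        while lo <= hi and A[lo] <= k <= A[hi]:
            if A[hi] == A[lo]:
                pivot = lo                  # all keys in range are equal
            else:
                # Estimate the position by linear interpolation on key values.
                pivot = lo + (k - A[lo]) * (hi - lo) // (A[hi] - A[lo])
            if A[pivot] == k:               # estimate is exact: done
                return pivot
            if A[pivot] < k:
                lo = pivot + 1
            else:
                hi = pivot - 1
        return None

    print(interpolation_search([2, 5, 9, 14, 21, 33], 14))  # prints 3

On uniformly distributed keys this takes O(log log n) probes on average, which is why it suits large sorted n-gram arrays.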
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
({1, …, m} \ {l1}, l) → ({1, …, m} \ {l, l1, l2}, l′)
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The larger sets are more accurate than the small sets.
It is probably the first analysis of Arabic parsing of this kind.
0
If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The final strong hypothesis, denoted f(x), is then the sign of a weighted sum of the weak hypotheses, f(x) = sign(Σ_{t=1}^{T} α_t h_t(x)), where the weights α_t are determined during the run of the algorithm, as we describe below.
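A minimal sketch of this combination step (the weak hypotheses and weights below are toy placeholders, not trained values):

    def strong_hypothesis(x, weak_hyps, alphas):
        # f(x) = sign(sum_t alpha_t * h_t(x)); each h_t returns +1 or -1.
        total = sum(a * h(x) for h, a in zip(weak_hyps, alphas))
        return 1 if total >= 0 else -1

    # Toy weak hypotheses voting on whether a string names a location.
    h1 = lambda x: 1 if x.endswith("ton") else -1
    h2 = lambda x: 1 if x[0].isupper() else -1
    h3 = lambda x: 1 if " " in x else -1
    print(strong_hypothesis("Washington", [h1, h2, h3], [0.9, 0.4, 0.2]))  # 1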
The corpus was annotated with different linguistic information.
0
Consequently, we implemented our own annotation tool, ConAno, in Java (Stede and Heintze, 2004), which provides exactly the functionality needed for our purpose.
This paper conducted research in the area of automatic paraphrase discovery.
0
We use a simple TF/IDF method to measure the topicality of words.
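The row only says "a simple TF/IDF method", so the variant below is one standard formulation, given as a hedged sketch; the paper's exact weighting may differ.

    import math
    from collections import Counter

    def tf_idf(term, doc_tokens, corpus):
        # Term frequency in the document times the log of inverse document
        # frequency over the corpus (a list of token lists).
        tf = Counter(doc_tokens)[term] / len(doc_tokens)
        df = sum(1 for d in corpus if term in d)
        idf = math.log(len(corpus) / (1 + df))  # +1 smooths zero df
        return tf * idf

    docs = [["merger", "shares", "agree"], ["weather", "rain"], ["shares", "fall"]]
    print(tf_idf("merger", docs[0], docs))  # high: "merger" is topical here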
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or cost-based scoring mechanism.
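Picking the best segmentation does not require enumerating every candidate: dynamic programming over string positions does it implicitly. The sketch below assumes a hypothetical word-cost table (lower is better, e.g. negative log probabilities) and that at least one full segmentation exists.

    def best_segmentation(s, cost):
        # best[j] = (cheapest cost to segment s[:j], start of last word).
        INF = float("inf")
        best = [(INF, None)] * (len(s) + 1)
        best[0] = (0.0, None)
        for j in range(1, len(s) + 1):
            for i in range(j):
                w = s[i:j]
                if w in cost and best[i][0] + cost[w] < best[j][0]:
                    best[j] = (best[i][0] + cost[w], i)
        words, j = [], len(s)
        while j > 0:                      # backtrack through best splits
            i = best[j][1]
            words.append(s[i:j])
            j = i
        return list(reversed(words))

    lexicon = {"ab": 1.0, "cd": 1.0, "a": 2.0, "bcd": 2.5, "abcd": 2.3}
    print(best_segmentation("abcd", lexicon))  # ['ab', 'cd'], total cost 2.0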
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
In line with perplexity results from Table 1, the PROBING model is the fastest followed by TRIE, and subsequently other packages.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Extract NE pair instances with contexts: from the four years of newspaper corpus, we extracted 1.9 million pairs of NE instances.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
At this stage the lattice path corresponds to segments only, with no PoS assigned to them.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This is manifest in the lexical choices, e.g.: Dagmar Ziegler is up to her neck in debt. (Footnote 1: www.coli.unisb.de/∼thorsten/tnt/)
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
With a good hash function, collisions of the full 64-bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Roughly speaking, we say that a tree set contains trees with dependent paths if there are two paths p_γ = v_γ w_γ and q_γ = v_γ x_γ in each γ ∈ Γ such that v_γ is some, possibly empty, shared initial subpath; w_γ and x_γ are not bounded in length; and there is some "dependence" (such as equal length) between the set of all w_γ and x_γ for each γ ∈ Γ.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
However, in existing systems, this expansion comes with a steep increase in model complexity.
Here we present two algorithms.
0
The test accuracy more or less asymptotes.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Finally we show that the combining techniques degrade very little when a poor parser is added to the set.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
31 75.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on average.
They have made use of local and global features to deal with instances of the same token in a document.
0
(2) Table 2: Sources of Dictionaries (Description / Source). Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . .
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We have not yet tried this.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Of course, we.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
SSER (subjective sentence error rate): for a more detailed analysis, the translations are judged by a human test person.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
A modified language model probability p_δ(e | e′, e″) is defined as follows: p_δ(e | e′, e″) = 1.0 if δ = 0, and p_δ(e | e′, e″) = p(e | e′, e″) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7.
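Transcribed directly into code for clarity (names are ad hoc; p_lm stands in for p(e | e′, e″)); the expectation below is one way the two cases might be combined, and the decoder may apply them differently during search.

    def p_delta(delta, p_lm):
        # 1.0 when delta = 0; the true trigram probability when delta = 1.
        return 1.0 if delta == 0 else p_lm

    # Mixing the two cases with p(delta = 1) = 0.7, as stated in the row:
    p_lm = 0.05
    mixed = 0.3 * p_delta(0, p_lm) + 0.7 * p_delta(1, p_lm)
    print(mixed)  # 0.3 * 1.0 + 0.7 * 0.05 = 0.335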
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., “the FBI” always refers to the same entity).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The first concerns how to deal with ambiguities in segmentation.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Using structural information As was explained in the results section, we extracted examples like “Smith estimates Lotus”, from a sentence like “Mr.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
Annotators suggested that long sentences are almost impossible to judge.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
2.
This corpus has several advantages: it is annotated at different levels.
0
2.3 Rhetorical structure.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Beyond optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
For the inverted alignment probability p(b_i | b_{i−1}, I, J), we drop the dependence on the target sentence length I. 2.2 Word Joining.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Such discrepancies can be aligned via an intermediate level of PoS tags.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
These are written to the state s(wn1) and returned so that they can be used for the following query.
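The PROBING structure named above is built on linear probing hash tables; the class below is only a generic illustration of that technique (open addressing with forward scanning), not KenLM's memory layout, and it assumes the table never fills.

    class LinearProbingTable:
        def __init__(self, capacity=8):
            self.slots = [None] * capacity      # (key, value) pairs or None

        def _probe(self, key):
            i = hash(key) % len(self.slots)
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % len(self.slots)   # collision: try next slot
            return i

        def put(self, key, value):
            self.slots[self._probe(key)] = (key, value)

        def get(self, key):
            entry = self.slots[self._probe(key)]
            return entry[1] if entry else None

    t = LinearProbingTable()
    t.put("the quick", -2.1)                    # e.g. an n-gram log prob
    print(t.get("the quick"))                   # -2.1

Lookups scan adjacent memory, so they are cache-friendly; that locality is a large part of why probing tables are fast in practice.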
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic ex pectations of the caseframe that extracts the anaphor.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
Lattice parsing (Chappelier et al., 1999) is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case. (Footnote 11: A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch.)
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
A contextual role represents the role that a noun phrase plays in an event or relationship.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
3 60.7 50.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Here, “EG” represents “Eastern Group Plc”.
A beam search concept is applied as in speech recognition.
0
The approach recursively evaluates a quantity Q(C, j), where C is the set of already visited cities and j is the last visited city.
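Q(C, j) is the classic Held-Karp dynamic program for the TSP; here is a compact sketch of the recursion on a small distance matrix:

    from itertools import combinations

    def held_karp(dist):
        # Q[(C, j)]: cheapest path from city 0 visiting exactly the cities
        # in C and ending at j (city 0 is excluded from C).
        n = len(dist)
        Q = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                C = frozenset(subset)
                for j in C:
                    Q[(C, j)] = min(Q[(C - {j}, k)] + dist[k][j]
                                    for k in C if k != j)
        full = frozenset(range(1, n))
        return min(Q[(full, j)] + dist[j][0] for j in range(1, n))

    dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
    print(held_karp(dist))  # 21, the cheapest round trip

The table has O(n * 2^n) entries, which is exactly why MT decoders restrict the allowed reorderings rather than searching over all coverage sets.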
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Figure 1 shows an example.
The corpus was annotated with different linguistic information.
0
The Potsdam Commentary Corpus
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Table 1. Domain / # of phrases / total phrases / accuracy: CC, 7 or more: 105 phrases, 87.6%; CC, 6 or less: 106 phrases, 67.0%; PC, 7 or more: 359 phrases, 99.2%; PC, 6 or less: 255 phrases, 65.1%.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Table 4 shows translation results for the three approaches.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Our results show that BABAR achieves good performance in both domains, and that the contextual role knowledge improves performance, especially on pronouns.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
One such approach uses comparable documents, which are sets of documents whose content is known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01].
Here both parametric and non-parametric models are explored.
0
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
It is probably the first analysis of Arabic parsing of this kind.
0
As a result, Habash et al.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Evaluation results for links
Human judges also pointed out difficulties with the evaluation of long sentences.
0
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
4 Evaluation Results.
A beam search concept is applied as in speech recognition.
0
Again, the monotone search performs worst.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
37 84.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)). m3(S) = ( Σ_{X ∩ Y = S} m1(X) ∗ m2(Y) ) / ( 1 − Σ_{X ∩ Y = ∅} m1(X) ∗ m2(Y) ) (1)
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The belief value that would have been assigned to the intersection of these sets is .60*.70=.42, but this belief has nowhere to go because the null set is not permissible in the model. So this probability mass (.42) has to be redistributed.
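The redistribution described here is Dempster's rule of combination: mass that falls on the empty set (the conflict, .42 in the example) is normalized away. A small sketch over masses represented as dicts keyed by frozensets (candidate names are made up):

    def combine(m1, m2):
        # m3(S) = sum over X ∩ Y = S of m1(X) * m2(Y),
        # normalized by 1 - (mass assigned to the empty set).
        raw, conflict = {}, 0.0
        for X, mx in m1.items():
            for Y, my in m2.items():
                S = X & Y
                if S:
                    raw[S] = raw.get(S, 0.0) + mx * my
                else:
                    conflict += mx * my       # .60 * .70 = .42 below
        return {S: v / (1.0 - conflict) for S, v in raw.items()}

    A, B = frozenset(["cand1"]), frozenset(["cand2"])
    m1 = {A: 0.60, A | B: 0.40}               # one knowledge source
    m2 = {B: 0.70, A | B: 0.30}               # a conflicting one
    print(combine(m1, m2))                    # conflict of .42 redistributed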
The AdaBoost algorithm was developed for supervised learning.
0
The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The results, along with the total number of phrases, are shown in Table 1.
They found replacing it with a ranked evaluation to be more suitable.
0
We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. pus, from which also the in-domain test set is taken.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference (or similarity) between treebanks.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
However, their system is a hybrid of hand-coded rules and machine learning methods.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
2.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
These systems rely on a training corpus that has been manually annotated with coreference links.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
m(S) represents the belief that the correct hypothesis is included in S. The model assumes that evidence also arrives as a probability density function (pdf) over sets of hypotheses. Integrating new evidence into the existing model is therefore simply a matter of defining a function to merge pdfs, one representing the current belief system and one representing the beliefs of the new evidence.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
(a) zhe4 pi1 ma3 lu4shang4 bing4 le0, this CL(assifier) horse way on sick ASP(ect): 'This horse got sick on the way'. (b) zhe4 tiao2 ma3lu4 hen3 shao3 che1 jing1guo4, this CL road very few car pass by: 'Very few cars pass by this road'.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information — as in PCC.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
We have not to date explored these various options.
Their results show that their high performance NER use less training data than other systems.
0
IdentiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Training under this model involves estimation of parameter values for P(y), P(m) and P(x | y).
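As a sketch of what estimating such parameters looks like under plain maximum likelihood from labeled pairs (the paper's setting differs, since it learns from unlabeled data; this only illustrates the quantities P(y) and P(x | y) with toy data):

    from collections import Counter

    def estimate(labeled):
        # labeled: list of (feature list, label) pairs.
        label_counts, feat_counts, feats_per_label = Counter(), Counter(), Counter()
        for feats, y in labeled:
            label_counts[y] += 1
            for x in feats:
                feat_counts[(x, y)] += 1
                feats_per_label[y] += 1
        total = sum(label_counts.values())
        P_y = {y: c / total for y, c in label_counts.items()}
        P_x_given_y = {(x, y): c / feats_per_label[y]
                       for (x, y), c in feat_counts.items()}
        return P_y, P_x_given_y

    data = [(["Mr.", "said"], "person"), (["Inc.", "said"], "organization")]
    print(estimate(data)[0])  # P(y): {'person': 0.5, 'organization': 0.5}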
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Space- or punctuation-delimited
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A totally non-stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
This is an iterative method that improves the estimation of the parameters at each iteration.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.
The AdaBoost algorithm was developed for supervised learning.
0
The weak hypothesis chosen was then restricted to be a predictor in favor of this label.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.
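A generic sketch of iterative label propagation over such a graph (the paper's objective and update rule differ in detail; alpha and the toy graph below are made up): seeded vertices, here standing in for the English side with projected distributions, stay anchored while unlabeled vertices absorb their neighbours' distributions.

    def propagate(graph, seeds, iters=10, alpha=0.5):
        # graph: vertex -> list of neighbours; seeds: vertex -> {label: prob}.
        dist = {v: dict(seeds.get(v, {})) for v in graph}
        labels = {l for d in seeds.values() for l in d}
        for _ in range(iters):
            new = {}
            for v, nbrs in graph.items():
                avg = {l: sum(dist[u].get(l, 0.0) for u in nbrs) / len(nbrs)
                       for l in labels}
                if v in seeds:                # keep seeds near their labels
                    avg = {l: alpha * seeds[v].get(l, 0.0)
                              + (1 - alpha) * avg[l] for l in labels}
                new[v] = avg
            dist = new
        return dist

    g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(propagate(g, {"a": {"NOUN": 1.0}})["c"])  # mass flows to "c"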
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.
Replacing this with a ranked evaluation seems to be more suitable.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
It is probably the first analysis of Arabic parsing of this kind.
0
Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as wa and fa to link new elements to both preceding clauses and the text as a whole.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied to MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa. (English: The reference medicinal product for Silapo is EPREX/ERYPO, which contains epoetin alfa.)
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In our implementation, we make perhaps the simplest choice of weak hypothesis.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
We checked whether the discovered links are listed in WordNet.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination. Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus log-likelihood, which is roughly speaking the training criterion used by the LM and TM themselves.