source_text (stringlengths 27–368) | label (int64 0–1) | target_text (stringlengths 1–5.38k) |
---|---|---|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
Hence, we use the bootstrap resampling method described by Koehn (2004).
|
Here we present two algorithms.
| 0 |
Yarowsky-cautious does not separate the spelling and contextual features, but does have a limit on the number of rules added at each stage.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3).
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
|
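The resampling procedure described in the row above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the mean of per-sentence scores stands in here for a true corpus-level BLEU computation):

```python
import random

def bootstrap_confidence_interval(scores, n_samples=1000, alpha=0.05, seed=0):
    """Estimate a confidence interval for a corpus-level metric by
    resampling sentence-level scores with replacement (after Koehn, 2004)."""
    rng = random.Random(seed)
    n = len(scores)
    resampled = []
    for _ in range(n_samples):
        # draw n sentences with replacement and score the resampled set
        sample = [scores[rng.randrange(n)] for _ in range(n)]
        resampled.append(sum(sample) / n)
    resampled.sort()
    lo = resampled[int(n_samples * alpha / 2)]
    hi = resampled[int(n_samples * (1 - alpha / 2))]
    return lo, hi
```

Sampling with replacement and reading off the empirical 2.5th and 97.5th percentiles of the 1000 resampled scores yields a 95% confidence interval.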
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
Obviously "Lotus" is part of the following clause rather than being the object of "estimates" and the extracted instance makes no sense.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
Not every annotator was fluent in both the source and the target language.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
We use double-circles to indicate the space-delimited token boundaries.
|
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
| 0 |
A number of PCC commentaries will be read by professional news speakers and their prosodic features will be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
18 We are grateful to ChaoHuang Chang for providing us with this set.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
The cost of storing these averages, in bits, is … Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
We present several variations for the lexical component P(T, W | ψ), each adding more complex parameterizations.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
Section 5 covers relevant previous work on SMT adaptation, and section 6 concludes.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
The judgements tend to be done more in the form of a ranking of the different systems.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
In order to ground such approaches in linguistic observation and description, a multi-level annotation … For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).
|
Here we present two algorithms.
| 0 |
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Past work, however, has typically associated these features with token occurrences, typically in an HMM: P(T, W) = ∏_{i=1}^n P(T_i) P(W_i | T_i).
|
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge proved helpful for resolving pronouns.
| 0 |
The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
4.1 Corpora.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
and "H" represents "Hanson Plc".
|
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge proved helpful for resolving pronouns.
| 0 |
an event.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
|
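The normalization rule in the row above amounts to shifting each judge's scores so that that judge's average lands on 3, the midpoint of the 1 to 5 scale. A small sketch (hypothetical helper, not from the paper):

```python
def normalize_judgements(raw_by_judge):
    """Shift each judge's raw judgements by (3 - that judge's average),
    so every judge's mean judgement becomes 3."""
    normalized = {}
    for judge, raws in raw_by_judge.items():
        avg = sum(raws) / len(raws)
        normalized[judge] = [r + (3 - avg) for r in raws]
    return normalized
```

This removes per-judge bias (harsh vs. lenient scorers) while preserving each judge's relative ordering of systems.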
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
The same numbers were used for each data structure.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
The idea of distinguishing between general and domain-specific examples is due to Daumé and Marcu (2006), who used a maximum-entropy model with latent variables to capture the degree of specificity.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Though we are not able to calculate their memory usage on our model, results reported in their paper suggest lower memory consumption than TRIE on large-scale models, at the expense of CPU time.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based.
|
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
| 0 |
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Entries landing in the same bucket are said to collide.
|
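Collisions in a bucketed hash table are typically resolved by probing forward to the next free slot. A generic linear-probing sketch (illustrative only, not the package's actual data structure):

```python
def build_probing_table(keys, size):
    """Minimal open-addressing table: a colliding key walks forward
    (with wraparound) until it finds an empty bucket."""
    table = [None] * size
    for key in keys:
        i = hash(key) % size
        while table[i] is not None:
            i = (i + 1) % size  # collision: probe the next bucket
        table[i] = key
    return table

def lookup(table, key):
    """Probe from the key's home bucket; an empty bucket means absence."""
    i = hash(key) % len(table)
    while table[i] is not None:
        if table[i] == key:
            return True
        i = (i + 1) % len(table)
    return False
```

Keeping the table sparsely loaded keeps probe chains short, trading memory for lookup speed.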
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Initially, the Dempster-Shafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses.
|
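The initial state of total ignorance can be sketched as a mass function placing all belief on θ, the set of every hypothesis (a minimal illustration of the idea, not BABAR's code):

```python
def initial_belief(hypotheses):
    """Dempster-Shafer initial state: the entire probability mass sits
    on theta, the set of all hypotheses (total ignorance)."""
    theta = frozenset(hypotheses)
    return {theta: 1.0}  # mass function: subset of hypotheses -> belief mass
```

Evidence is then incorporated by combining this mass function with others via Dempster's rule, gradually shifting mass from θ onto smaller, more specific subsets.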
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
[Table residue: per-language results under the 1TW, +PRIOR, and +FEATS settings for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.]
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
In this section, we will explain the algorithm step by step with examples.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted; however, the effect of speech recognition errors is not covered.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
The gender, number, and scoping KSs eliminate candidates from consideration.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
For speed, we plan to implement the direct-mapped cache from BerkeleyLM.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.
|
The corpus was annotated with different linguistic information.
| 0 |
A different but supplementary perspective on discourse-based information structure is taken by one of our partner projects, which is interested in conventionalized patterns (e.g., order of information in news reports).
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
By this assumption, each element x ∈ X can also be represented as (x_1, x_2) ∈ X_1 × X_2.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
Models that employ this strategy are denoted hsp.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
The task is to learn a function from an input string (proper name) to its type, which we will assume to be one of the categories Person, Organization, or Location.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
We settled on contrastive evaluations of 5 system outputs for a single test sentence.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
However, 1 http://maxent.sourceforge.net 3.2 Testing.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
Moving beyond directly related work, major themes in SMT adaptation include the IR (Hildebrand et al., 2005; Lü et al., 2007; Zhao et al., 2004) and mixture (Finch and Sumita, 2008; Foster and Kuhn, 2007; Koehn and Schroeder, 2007; Lü et al., 2007) approaches for LMs and TMs described above, as well as methods for exploiting monolingual in-domain text, typically by translating it automatically and then performing self training (Bertoldi and Federico, 2009; Ueffing et al., 2007; Schwenk and Senellart, 2009).
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
(Yarowsky 95) describes an algorithm for word-sense disambiguation that exploits redundancy in contextual features, and gives impressive performance.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
({1, …, m}, l) → ({1, …, m} \ {l, l1}, l0)
|
The corpus was annotated with different linguistic information.
| 0 |
For the English RST-annotated corpus that is made available via LDC, his corresponding result is 62%.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
4.2 Global Features.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
No. of Articles.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.
|
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
| 0 |
Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead).
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
The results described in this paper suggest a characterization of mild context-sensitivity in terms of generalized context-freeness.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
However there is no global pruning.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
Mai.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
(Blum and Mitchell 98) offer a promising formulation of redundancy, also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
2.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 1 |
The problem of coreference resolution has received considerable attention, including theoretical discourse models (e.g., (Grosz et al., 1995; Grosz and Sidner, 1998)), syntactic algorithms (e.g., (Hobbs, 1978; Lappin and Le- ass, 1994)), and supervised machine learning systems (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
Each feature group can be made up of many binary features.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
By this assumption, each element x ∈ X can also be represented as (x_1, x_2) ∈ X_1 × X_2.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
If the expression is longer or complicated (like "A buys B" and "A's purchase of B"), it is called "paraphrase", i.e. a set of phrases which express the same thing or event.
|
Here we present two algorithms.
| 0 |
The final strong hypothesis, denoted f(x), is then the sign of a weighted sum of the weak hypotheses, f(x) = sign(∑_t α_t h_t(x)), where the weights α_t are determined during the run of the algorithm, as we describe below.
|
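The weighted vote in the row above can be written directly (a sketch of the stated formula; the weak hypotheses and weights here are placeholders, not learned values):

```python
def strong_hypothesis(weak_hypotheses, weights, x):
    """AdaBoost-style final classifier:
    f(x) = sign(sum_t alpha_t * h_t(x)), with each h_t returning +1 or -1."""
    total = sum(alpha * h(x) for alpha, h in zip(weights, weak_hypotheses))
    return 1 if total >= 0 else -1
```

In the full algorithm, each α_t is set from the weighted training error of h_t, so more accurate weak hypotheses get a larger say in the vote.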
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
We can now compare this algorithm to that of (Yarowsky 95).
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
There are 13 types of extensions needed to describe the verbgroup reordering.
|
There is no global pruning.
| 0 |
The complexity of the quasimonotone search is O(E3 J (R2+LR)).
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
It is a relatively frequent word in the domain, but it can be used in different extraction scenarios.
|
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
| 0 |
Recently, statistical NERs have achieved results that are comparable to hand-coded systems.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
The P(W | T, ψ) term in the lexicon component now decomposes as P(W | T, ψ) = ∏_{i=1}^n P(W_i | T_i, ψ); these distributions are not modeled by the standard HMM, which instead can model token-level frequency.
|
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
| 0 |
This annotation choice weakens splitIN.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
However, until such standards are universally adopted in evaluating Chinese segmenters, claims about performance in terms of simple measures like percent correct should be taken with a grain of salt; see, again, Wu and Fung (1994) for further arguments supporting this conclusion.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
|
There is no global pruning.
| 0 |
Table 5: Effect of the beam threshold on the number of search errors (147 sentences).
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext. We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.
|
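The counting-and-normalizing step for English word types can be sketched as follows (illustrative; the supervised tagger's output is assumed to be a list of (word, tag) pairs):

```python
from collections import Counter, defaultdict

def type_tag_distributions(tagged_tokens):
    """Count per-type tag occurrences of tagged tokens and normalize
    each type's counts into a tag distribution."""
    counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        counts[word][tag] += 1
    dists = {}
    for word, tag_counts in counts.items():
        total = sum(tag_counts.values())
        dists[word] = {tag: n / total for tag, n in tag_counts.items()}
    return dists
```

These type-level distributions then serve as the seed labels that label propagation spreads across the similarity graph.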
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
Additionally, it works about 3 times as fast as the IBM style search.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
We currently simulate this by crafting a WCFG and feeding it to BitPar.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
In our experiment, we set the threshold of the TF/ITF score empirically using a small development corpus; a finer adjustment of the threshold could reduce the number of such keywords.
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
On the other hand, given that all schemes have similar parsing accuracy overall, this means that the Path scheme is the least likely to introduce errors on projective arcs.
|
Here we present two algorithms.
| 0 |
Following the convention presented in earlier sections, we assume that each example is an instance pair of the form (x_{1,i}, x_{2,i}) where x_{j,i} ∈ 2^{X_j}, j ∈ {1, 2}.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
£ : _ADV: 5.88 If:!
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
This class of formalisms has the properties that their derivation trees are local sets, and they manipulate objects using a finite number of composition operations that use a finite number of symbols.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
paper, and is missing 6 examples from the A set.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
This is not completely surprising, since all systems use very similar technology.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Ex: The regime gives itself the right...
|