source_text stringlengths 27–368 | label int64 0–1 | target_text stringlengths 1–5.38k |
---|---|---|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Ex: The brigade, which attacked ...
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 1 |
A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
The 13 positions of the source sentence are processed in the order shown.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a priori equally likely; the only reason to prefer one segmentation over another is due to the overall syntactic context, which is modeled via the PCFG derivations.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
name => 2 hanzi family 2 hanzi given.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
The preterminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Training examples are generated automatically by identifying noun phrases that can be easily resolved with their antecedents using lexical and syntactic heuristics.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Morphologically derived words such as xue2sheng1+men0.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
If no candidate satisfies this condition (which is often the case), then the anaphor is left unresolved.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
level.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
M works in space S(n) if for every string that M accepts no configuration exceeds space S(n).
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
Nonstochastic lexical-knowledge-based approaches have been much more numerous.
|
There is no global pruning.
| 0 |
The following auxiliary quantity is defined: Q_{e'}(e, C, j) := probability of the best partial hypothesis (e_1^i, b_1^i), where C = {b_k | k = 1, ..., i}, b_i = j, e_i = e and e_{i-1} = e'.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
|
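The n-gram counting described in this row (the basis of BLEU-style evaluation) can be sketched as follows. A minimal sketch with hypothetical sentences; counts are clipped at the reference frequency as BLEU does, which the row itself does not specify:

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_matches(system, reference, n):
    # Count system n-grams that also occur in the reference translation,
    # clipping each n-gram at its reference frequency (BLEU-style).
    sys_counts = Counter(ngrams(system, n))
    ref_counts = Counter(ngrams(reference, n))
    return sum(min(count, ref_counts[g]) for g, count in sys_counts.items())

system = "the cat sat on the mat".split()
reference = "the cat is on the mat".split()
print(ngram_matches(system, reference, 1))  # 5 matching unigrams
print(ngram_matches(system, reference, 2))  # 3 matching bigrams
```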
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.
|
The corpus was annotated with different linguistic information.
| 0 |
We use MMAX for this annotation as well.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
(3)) to be defined over unlabeled as well as labeled instances.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
In Chinese text, individual characters of the script, to which we shall refer by their traditional name of hanzi, are written one after another with no intervening spaces; a Chinese sentence is shown in Figure 1. Partly as a result of this, the notion "word" has never played a role in Chinese philological tradition, and the idea that Chinese lacks anything analogous to words in European languages has been prevalent among Western sinologists; see DeFrancis (1984).
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
The alignment mapping is j → i = a_j from source position j to target position i = a_j. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
3.3 Evaluation Results.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
The pseudo-code describing the algorithm is given in Fig.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Process nominals name the action of the transitive or ditransitive verb from which they derive.
|
The use of global features has yielded excellent performance on MUC-6 and MUC-7 test data.
| 0 |
Hence we decided to restrict ourselves to only information from the same document.
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
| 0 |
(Webber et al., 2003)).
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.
|
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
| 0 |
That is, we can use the discourse parser on PCC texts, emulating for instance a "co-reference oracle" that adds the information from our co-reference annotations.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
C is the union of the sets of constituents suggested by the parsers. r(c) is a binary function returning t (for true) precisely when the constituent c ∈ C should be included in the hypothesis.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
2.2.3 Lexical Caseframe Expectations The second type of contextual role knowledge learned by BABAR is Lexical Caseframe Expectations, which are used by the CFLex knowledge source.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
The AdaBoost algorithm was developed for supervised learning.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
5 64.7 42.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
A dynamic programming recursion similar to the one in Eq. 2 is evaluated.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et al., 2006)).
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
The manual scores are averages over the raw unnormalized scores.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
We also thank ChaoHuang Chang, reviewers for the 1994 ACL conference, and four anonymous reviewers for Computational Linguistics for useful comments.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
In total, across all domains, we kept 13,976 phrases with keywords.
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
Our experiments all concern the analytical annotation, and the first experiment is based only on the training part.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
Unification is used in LFG's to produce structures having two dependent spines of unbounded length as in Figure 5.
|
The corpus was annotated with different linguistic information.
| 0 |
On the other hand, we are interested in the application of rhetorical analysis or "discourse parsing" (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
Much of the study of grammatical systems in computational linguistics has been focused on the weak generative capacity of grammatical formalisms.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
Features and context were initially introduced into the models, but they failed to offer any gains in performance.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
7.96 5.55 1
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Thus we have some confidence that our own performance is at least as good as that of Chang et al.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Table 9: Dev set results for sentences of length ≤ 70.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in f, men0 will be denoted unseen(f,).
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
The most informative scheme, Head+Path, gives the highest scores, although with respect to Head the difference is not statistically significant, while the least informative scheme, Path – with almost the same performance on treebank transformation – is significantly lower (p < 0.01).
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation,9 translating the 3003-sentence test set.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
The independence of paths in the tree sets of the k-th grammatical formalism in this hierarchy can be shown by means of a tree pumping lemma of the form t1ti3t.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
Global features are extracted from other occurrences of the same token in the whole document.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
Orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into words.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Again, this deserves further investigation.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Relative pronouns with only 1 NP in scope.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
Many human evaluation metrics have been proposed.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
98 15.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes.
|
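The parser-combination idea in this row, learning how much each parser should be trusted and voting on constituents, can be sketched as below. The spans, labels, and trust weights are hypothetical, and a simple trust-weighted majority vote stands in for the paper's naïve Bayes formulation:

```python
def combine_constituents(parser_outputs, trust):
    # Include a constituent when the trust-weighted vote of the parsers
    # proposing it exceeds half the total trust mass.
    candidates = set().union(*parser_outputs)  # C: union of suggested constituents
    total = sum(trust)
    chosen = set()
    for c in candidates:
        vote = sum(w for out, w in zip(parser_outputs, trust) if c in out)
        if vote > total / 2:
            chosen.add(c)
    return chosen

# Hypothetical constituent spans (start, end, label) from three parsers.
p1 = {(0, 2, "NP"), (3, 5, "VP"), (0, 5, "S")}
p2 = {(0, 2, "NP"), (0, 5, "S")}
p3 = {(0, 2, "NP"), (3, 5, "VP"), (2, 5, "VP")}
print(sorted(combine_constituents([p1, p2, p3], trust=[0.9, 0.8, 0.7])))
```

The lone (2, 5, "VP") hypothesis from the third parser falls below the vote threshold and is dropped.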
This assumption, however, is not inherent to type-based tagging models.
| 0 |
Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
Denote the unthresholded classifiers after t − 1 rounds by g_i^(t−1) and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Performance typically stabilizes across languages after only a few iterations.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.
|
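The sorted-array lookup this row attributes to other packages can be sketched with Python's bisect module; the array contents are illustrative:

```python
import bisect

def find_key(A, key):
    # Binary search in a sorted array A: O(log |A|) per lookup.
    i = bisect.bisect_left(A, key)
    if i < len(A) and A[i] == key:
        return i
    return -1  # key absent

A = [3, 7, 11, 19, 24, 31]
print(find_key(A, 19))  # 3
print(find_key(A, 5))   # -1
```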
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
Judges AG, GR, ST, M1, M2, M3, T1, T2, T3 (pairwise agreement, upper triangle):
AG: 0.70 0.70 0.43 0.42 0.60 0.60 0.62 0.59
GR: 0.99 0.62 0.64 0.79 0.82 0.81 0.72
ST: 0.64 0.67 0.80 0.84 0.82 0.74
M1: 0.77 0.69 0.71 0.69 0.70
M2: 0.72 0.73 0.71 0.70
M3: 0.89 0.87 0.80
T1: 0.88 0.82
T2: 0.78
respectively, the recall and precision.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
This method, one instance of which we term the "greedy algorithm" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
The results, along with the total number of phrases, are shown in Table 1.
|
Here we present two algorithms.
| 0 |
In addition to a heuristic based on decision list learning, we also presented a boosting-like framework that builds on ideas from (Blum and Mitchell 98).
|
All the texts were annotated by two people.
| 0 |
The significant drop in number of pupils will begin in the fall of 2003.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
37 79.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
(b) ta1 de cai2neng2 hen3 gao1 (he DE talent very high) 'He has great talent'. While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences ma3lu4 and cai2neng2 respectively.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
2.1 Part-of-speech tags.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
For example, CFG's cannot produce trees of the form shown in Figure 1 in which there are nested dependencies between S and NP nodes appearing on the spine of the tree.
|
The texts were annotated with the RSTtool.
| 0 |
At present, the "Potsdam Commentary Corpus" (henceforth "PCC" for short) consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
By this assumption, each element x ∈ X can also be represented as (x1, x2) ∈ X1 × X2.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases).
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Third, we develop a human interpretable grammar that is competitive with a latent variable PCFG.
|
There is no global pruning.
| 0 |
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
The matching sentence pairs are then added to the IN corpus, and the system is re-trained.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
We showed in §2 that lexical ambiguity explains the underperformance of these categories.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Quantization can be improved by jointly encoding probability and backoff.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Ku¨ bler, 2005).
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
It is generally considered to be fast (Pauls and Klein, 2011), with a default implementation based on hash tables within each trie node.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
Table 5 shows the overall parsing accuracy attained with the three different encoding schemes, compared to the baseline (no special arc labels) and to training directly on non-projective dependency graphs.
|