source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | For statistics on this test set, refer to Figure 1. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | The Dempster–Shafer rule for combining pdfs is: to {C}, meaning that it is 70% sure the correct hypothesis is C. The intersection of these sets is the null set because these beliefs are contradictory. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The results in Table 2 were achieved on the development set. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | splitPUNC restores the convention of the WSJ. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 7 Conclusion and Future Work. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Therefore, for n-gram wn1 , all leftward extensions wn0 are an adjacent block in the n + 1-gram array. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | The links can solve the problem. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic expectations of the caseframe that extracts the anaphor. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For the experiments in this paper we made a couple of additional modifications to the CoBoost algorithm. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | MENE has only been tested on MUC7. |
A beam search concept is applied as in speech recognition. | 0 | The inverted alignment probability p(b_i | b_{i-1}; I; J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Since the inclusion of out-ofdomain test data was a very late decision, the participants were not informed of this. |
BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results. | 0 | 3 The Coreference Resolution Model. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Features and context were initially introduced into the models, but they refused to offer any gains in performance. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | We can check, what the consequences of less manual annotation of results would have been: With half the number of manual judgements, we can distinguish about 40% of the systems, 10% less. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length ≤ 40. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | We weight edges using a unigram language model estimated with Good-Turing smoothing. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | information structure. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | However, in practice, unknown word models also make the distribution improper. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | It should be clear from the onset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”). |
Their results show that their high performance NER use less training data than other systems. | 0 | Lexicon Feature: The string of the token is used as a feature. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | However, there are phrases which express the same meanings even though they do not share the same keyword. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Figure 1 depicts the lattice for a 2-words sentence bclm hneim. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 3 68.9 50. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Each pattern represents the role that a noun phrase plays in the surrounding context. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 3 54.4 33. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | The judgement of 4 in the first case will go to a vastly better system output than in the second case. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements x_i. The extent of the confidence interval [x̄−d, x̄+d] can be computed by d = 1.96 · s/√n (6). Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | On the first of these, the B set, our system had 64% recall and 86% precision; on the second, the C set, it had 33% recall and 19% precision. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | conceptual relationship in the discourse. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full iterations. All of the probabilities on the right-hand side of P(T_i | T_{-i}, β) ∏_{(f,v)∈W_i} P(v | T_i, f, W_{-i}, T_{-i}, β) are Dirichlet distributions, which can be computed analytically given counts. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Evalb is a Java re-implementation of the standard labeled precision/recall metric. The ATB gives all punctuation a single tag. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We use the universal POS tagset of Petrov et al. (2011) in our experiments. This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words). |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | In addition, each feature function is a binary function. |
All the texts were annotated by two people. | 0 | • Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | We can make several observations on the cause of errors. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | As one reviewer points out, one problem with the unigram model chosen here is that there is still a tendency to pick a segmentation containing fewer words. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Search method, t0, CPU time [sec], #search errors, mWER [%]: QmS: (0.0, 0.07, 108, 42.6), (1.0, 0.13, 85, 37.8), (2.5, 0.35, 44, 36.6), (5.0, 1.92, 4, 34.6), (10.0, 10.6, 0, 34.5); IbmS: (0.0, 0.14, 108, 43.4), (1.0, 0.3, 84, 39.5), (2.5, 0.8, 45, 39.1), (5.0, 4.99, 7, 38.3), (10.0, 28.52, 0, 38.2). Table 6 shows example translations obtained by the three different approaches. |
The AdaBoost algorithm was developed for supervised learning. | 0 | In the named entity domain these rules were Each of these rules was given a strength of 0.9999. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | So, this was a surprise element due to practical reasons, not malice. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In this paper, we outline how such a family of formalisms can be defined, and show that like CFG's, each member possesses a number of desirable linguistic and computational properties: in particular, the constant growth property and polynomial recognizability. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | It can be easily shown from Thatcher's result that the path set of every local set is a regular set. |
Here we present two algorithms. | 0 | In this work we extended the AdaBoost.MH (Schapire and Singer 98) algorithm to the cotraining case. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | On average, 6 reference translations per automatic translation are available. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We performed three experiments to evaluate our techniques. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For example, a story can mention “the FBI”, “the White House”, or “the weather” without any prior referent in the story. |
The texts were annotated with the RSTtool. | 0 | The government has to make a decision, and do it quickly. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Assigning rhetorical relations thus poses questions that can often be answered only subjectively. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | For example, the Wang, Li, and Chang system fails on the sequence nian2 nei4 sa3 in (k) since nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.' |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 1 | We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | If “gun” and “revolver” refer to the same object, then it should also be acceptable to say that Fred was “killed with a gun” and that the burglar “fired a revolver”. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Having found (spelling, context) pairs in the parsed data, a number of features are extracted. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Much could be done to further reduce memory consumption. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | For previously unseen hanzi in given names, Chang et al. assign a uniform small cost; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason, for example, because they have a bad connotation. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The resulting parsers surpass the best previously published performance results for the Penn Treebank. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Specifically, we assume each word type W consists of feature-value pairs (f, v). |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two. The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 3 68.4 49. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | In the “Person – Person” domain, 618 keywords are found, and in the “Country – Country” domain, 303 keywords are found. |
The corpus was annotated with different linguistic information. | 0 | Annotation of syntactic structure for the core corpus has just begun. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | In our case multi-threading is trivial because our data structures are read-only and uncached. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | 0750271 and by the DARPA GALE program. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | We picked two domains, the CC-domain and the “Person – Company” domain (PC-domain), for the evaluation, as the entire system output was too large to evaluate. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | 2.1 Inverted Alignments. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | At most one feature in this group will be set to 1. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | mein 5. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | An anti-greedy algorithm, AG: instead of the longest match, take the. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | This limitation is the obstacle to making the technology “open domain”. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | (b) F.i'JJI! |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | (Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We would like to thank Ryan McDonald for numerous discussions on this topic. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | We focus on phrases which connect two Named Entities (NEs), and proceed in two stages. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Judges varied in the average score they handed out. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website, which are published in all the four languages of the shared task. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The estimation of the probabilities in the model is carried out as shown in Equation 4. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Future work along these lines will incorporate other layers of annotation, in particular the syntax information. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | Thus, most broad-coverage parsers based on dependency grammar have been restricted to projective structures. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For the disasters domain, 8245 texts were used for training and the 40 test documents contained 447 anaphoric links. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | We now introduce a new algorithm for learning from unlabeled examples, which we will call DLCoTrain (DL stands for decision list, the term Cotrain is taken from (Blum and Mitchell 98)). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis). |
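The column summary in the table header (source_text: string, label: int64 with values 0 or 1, target_text: string) can be checked programmatically. The sketch below is illustrative only: the use of pandas and the specific checks are assumptions, and the two sample rows merely mirror rows from the preview, not an API for this dataset.

```python
import pandas as pd

# Illustrative rows mirroring the preview's schema:
# source_text (string), label (int64 in {0, 1}), target_text (string).
rows = [
    {"source_text": "Two general approaches are presented and two combination techniques are described for each approach.",
     "label": 0,
     "target_text": "We performed three experiments to evaluate our techniques."},
    {"source_text": "We consider the structural descriptions produced by various grammatical formalisms.",
     "label": 1,
     "target_text": "We consider the structural descriptions produced by various grammatical formalisms."},
]
df = pd.DataFrame(rows)

# Sanity checks implied by the column summaries: binary labels and
# source_text lengths within the advertised 27-368 character range.
assert df["label"].isin({0, 1}).all()
assert df["source_text"].str.len().between(27, 368).all()
print(df["label"].value_counts().to_dict())
```

A label of 1 appears to mark pairs where source and target state the same content, as in the Vijay-Shanker row above; the checks here only validate types and ranges, not that semantics.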
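One target sentence above describes per-sentence judgement statistics via a confidence half-width d = 1.96 · s/√n; reading the partially garbled formula as the usual 95% normal approximation is an assumption. A minimal sketch, with made-up scores:

```python
import math

def confidence_half_width(values, z=1.96):
    """Half-width d of a 95% confidence interval for the mean:
    d = z * s / sqrt(n), with s the sample standard deviation."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance with Bessel's correction (n - 1 in the denominator).
    s2 = sum((x - mean) ** 2 for x in values) / (n - 1)
    return z * math.sqrt(s2) / math.sqrt(n)

# Hypothetical sentence-level judgements on the 1-5 scale mentioned above.
scores = [3, 4, 4, 5, 3, 4, 2, 4, 3, 4]
d = confidence_half_width(scores)
mean = sum(scores) / len(scores)
print(f"mean={mean:.2f}, 95% interval=[{mean - d:.2f}, {mean + d:.2f}]")
```

Since d shrinks as 1/√n, halving the number of manual judgements widens the intervals, which matches the observation above that fewer system pairs become distinguishable.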