source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars)
---|---|---
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | For example, if is found in the list of person first names, the feature PersonFirstName is set to 1. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | However, the accuracy is considerably higher than previously reported results for robust non-projective parsing of Czech, with a best performance of 73% UAS (Holan, 2004). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. |
Here both parametric and non-parametric models are explored. | 0 | None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | However, it is desirable if we can separate them. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If “gun” and “revolver” refer to the same object, then it should also be acceptable to say that Fred was “killed with a gun” and that the burglar “fired a revolver”. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | , for A. T.&T. nonalpha.. . |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For example, one of the ATB samples was the determiner dhalik “that.” The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | At first glance, we quickly recognize that many systems are scored very similar, both in terms of manual judgement and BLEU. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set. |
Here both parametric and non-parametric models are explored. | 0 | From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation-R2 = 0.20, p < 0.005; see Figure 6. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | This assumption, however, is not inherent to type-based tagging models. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | (See also Wu and Fung [1994].) |
Here both parametric and non-parametric models are explored. | 0 | The theory has also been validated empirically. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | All four of the techniques studied result in parsing systems that perform better than any previously reported. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 9 65.5 46. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | We model each parse as the decisions made to create it, and model those decisions as independent events. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For ft the Good-Turing estimate just discussed gives us an estimate of p(unseen(ft) | ft), the probability of observing a previously unseen instance of a construction in ft given that we know that we have a construction in ft. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | (b) F.i'JJI! |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | The paper is structured as follows. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We therefore also normalized judgements on a per-sentence basis. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | In words, the judgements are normalized, so that the average normalized judgement per judge is 3. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | As we have said, parse quality decreases with sentence length. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | As the name implies, space is O(m) and linear in the number of entries. |
The “Potsdam Commentary Corpus” or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | This would result in better rest cost estimation and better pruning.10 In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not ac tually tag the words as belonging to one or another class of expression. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The compressed variant uses block compression and is rather slow as a result. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 3.1 General Knowledge Sources. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The returned state s(w1n) may then be used in a follow-on query p(wn+1 | s(w1n)) that extends the previous query by one word. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | (In this figure eps is c) be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | (2006). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | A better approach would be to distin guish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Thus we have some confidence that our own performance is at least as good as that of Chang et al. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | None of the models attach the attributive adjectives correctly. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Compared with the widely-used SRILM, ours is 2.4 times as fast while using 57% of the memory. The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. It simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | HR0011-06-C-0022. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German commentaries assembled by Potsdam University. | 0 | The PCC is not the result of a funded project. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | While the first three models get three to four tags wrong, our best model gets only one word wrong and is the most accurate among the four models for this example. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need to sample values for Ti and consider this setting of t(i). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Section 2.1 describes how BABAR generates training examples to use in the learning process. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The attachment in such cases encompasses a long distance dependency that cannot be captured by Markovian processes that are typically used for morphological disambiguation. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | (2009). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01). |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | If the same pair of NE instances is used with different phrases, these phrases are likely to be paraphrases. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The structure uses linear probing hash tables and is designed for speed. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Two sets of examples from Gan are given in (1) and (2) (:::::: Gan's Appendix B, exx. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | We have used the Java-based opennlp maximum entropy package1. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Timing is based on plentiful memory. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Given a document to process, BABAR uses four modules to perform coreference resolution. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | However, their system is a hybrid of hand-coded rules and machine learning methods. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | For each source word f, the list of its possible translations e is sorted according to p(f|e) · puni(e), where puni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | An ATM may be thought of as spawning independent processes for each applicable move. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | About half of the participants of last year’s shared task participated again. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Obviously “Lotus” is part of the following clause rather than being the object of “estimates” and the extracted instance makes no sense. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | If the log backoff of wnf is also zero (it may not be in filtered models), then wf should be omitted from the state. |
The texts were annotated with the RSTtool. | 0 | They are also labelled for their topicality (yes / no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Judges: AG GR ST M1 M2 M3 T1 T2 T3. AG: 0.70 0.70 0.43 0.42 0.60 0.60 0.62 0.59; GR: 0.99 0.62 0.64 0.79 0.82 0.81 0.72; ST: 0.64 0.67 0.80 0.84 0.82 0.74; M1: 0.77 0.69 0.71 0.69 0.70; M2: 0.72 0.73 0.71 0.70; M3: 0.89 0.87 0.80; T1: 0.88 0.82; T2: 0.78. respectively, the recall and precision. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Gather phrases using keywords Next, we select a keyword for each phrase, the top-ranked word based on the TF/IDF metric. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Consequently, all three parsers prefer the nominal reading. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ. |
All the texts were annotated by two people. | 0 | Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | mark-ContainsVerb is especially effective for distinguishing root S nodes of equational sentences. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1/256. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | All the NE pair instances which co-occur separated by at most 4 chunks are collected along with information about their NE types and the phrase between the NEs (the “context”). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | (Webber et al., 2003)). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | We further thank Dr. J.-S. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | For statistics on this test set, refer to Figure 1. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Compared with the widely-used SRILM, ours is 2.4 times as fast while using 57% of the memory. The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. It simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | About half of the participants of last year’s shared task participated again. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | We group the features used into feature groups. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 77376) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The computing time is low, since no reordering is carried out. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | As with HG's derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and addresses of nodes of the elementary tree where adjunctions occurred. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | For verbs we add two features. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | In our model, however, all lattice paths are taken to be a-priori equally likely. |
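Several rows in the table describe combining parsers by letting them vote on constituents' membership in the hypothesized set. A minimal sketch of majority voting over constituent spans, assuming each parse is represented as a set of (start, end, label) tuples (this representation is a simplification for illustration, not the papers' exact formulation):

```python
from collections import Counter

def vote_constituents(parses, threshold=None):
    """Keep a constituent if a majority of parsers propose it.

    parses: list of sets of (start, end, label) span tuples, one per parser.
    With an odd number of parsers, majority voting cannot introduce
    crossing brackets that no individual parser proposed.
    """
    if threshold is None:
        threshold = len(parses) // 2 + 1  # strict majority
    counts = Counter(span for parse in parses for span in parse)
    return {span for span, n in counts.items() if n >= threshold}

# Three hypothetical parses of a 5-token sentence.
p1 = {(0, 5, "S"), (0, 2, "NP"), (2, 5, "VP")}
p2 = {(0, 5, "S"), (0, 2, "NP"), (3, 5, "VP")}
p3 = {(0, 5, "S"), (0, 3, "NP"), (2, 5, "VP")}
print(sorted(vote_constituents([p1, p2, p3])))
# → [(0, 2, 'NP'), (0, 5, 'S'), (2, 5, 'VP')]
```

Only spans proposed by at least two of the three parsers survive, which matches the table's observation that if no parser produces crossing brackets, none can be voted in.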
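One KenLM row quotes the trick of storing the "context extends" flag in the sign bit of a zero log backoff: +0.0 for contexts that extend and −0.0 for contexts that do not. In IEEE 754, the two zeros compare equal but carry distinct sign bits, which `math.copysign` can recover; a sketch under that reading:

```python
import math

def encode_backoff(backoff, extends):
    """Store the 'context extends' flag in the sign bit when backoff is zero."""
    if backoff == 0.0:
        return 0.0 if extends else -0.0
    return backoff

def context_extends(backoff):
    """Recover the flag: any nonzero backoff implies the context extends;
    for zero backoffs the sign bit distinguishes +0.0 (extends) from -0.0."""
    return backoff != 0.0 or math.copysign(1.0, backoff) > 0.0

b = encode_backoff(0.0, extends=False)
print(b == 0.0)            # True: -0.0 compares equal to 0.0
print(context_extends(b))  # False: the sign bit still distinguishes them
```

The point of the trick is that no extra bit of storage is needed: the flag rides along in a float field that would otherwise waste its sign on a value known to be zero.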
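The reordering row sorts each source word's candidate translations by p(f|e) · puni(e) and keeps only the best 50. A small sketch of that pruning step, with hypothetical probability tables:

```python
def best_translations(f, p_cond, p_uni, limit=50):
    """Sort candidate translations e of source word f by p(f|e) * puni(e)
    and keep the top `limit` (50 in the paper's setting).

    p_cond: dict mapping (f, e) -> p(f|e); p_uni: dict mapping e -> puni(e).
    """
    candidates = [e for (src, e) in p_cond if src == f]
    candidates.sort(key=lambda e: p_cond[(f, e)] * p_uni[e], reverse=True)
    return candidates[:limit]

# Hypothetical toy tables, for illustration only.
p_cond = {("haus", "house"): 0.8, ("haus", "home"): 0.6, ("haus", "building"): 0.3}
p_uni = {"house": 0.01, "home": 0.02, "building": 0.005}
print(best_translations("haus", p_cond, p_uni, limit=2))
# → ['home', 'house']
```

Weighting the channel probability by the target unigram probability favors translations that are both likely given the source word and common in the target language.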
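The evaluation rows state that judgements are normalized so that the average normalized judgement per judge is 3. A minimal additive-shift sketch of the per-judge variant (the shared-task rows also mention a per-sentence normalization, which is not shown here):

```python
def normalize_judgements(scores_by_judge, target_mean=3.0):
    """Shift each judge's raw 1-5 scores so that judge's mean becomes target_mean."""
    normalized = {}
    for judge, scores in scores_by_judge.items():
        mean = sum(scores) / len(scores)
        normalized[judge] = [s - mean + target_mean for s in scores]
    return normalized

# Hypothetical raw scores from a lenient and a strict judge.
raw = {"judge_a": [5, 5, 4], "judge_b": [2, 3, 1]}
norm = normalize_judgements(raw)
print(norm["judge_b"])  # → [3.0, 4.0, 2.0]; each judge now averages 3.0
```

The shift removes systematic leniency or strictness differences between judges while preserving each judge's relative ranking of sentences.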
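The paraphrase-discovery rows select a keyword for each phrase as the top-ranked word by a TF/IDF metric. The exact weighting there is unstated; this sketch assumes the standard tf · log(N/df) form:

```python
import math
from collections import Counter

def top_keyword(phrase_tokens, doc_freq, num_docs):
    """Pick the top-ranked word in a phrase by a TF*IDF score.

    doc_freq: dict mapping word -> number of documents containing it.
    """
    tf = Counter(phrase_tokens)
    def score(w):
        return tf[w] * math.log(num_docs / doc_freq.get(w, 1))
    return max(set(phrase_tokens), key=score)

# Hypothetical document frequencies over a 100k-document corpus.
doc_freq = {"agreed": 5000, "to": 90000, "buy": 3000, "unit": 800}
print(top_keyword(["agreed", "to", "buy", "unit"], doc_freq, num_docs=100000))
# → 'unit' (rarest content word scores highest)
```

Frequent function words like "to" get a near-zero IDF and are effectively excluded, so the keyword tends to be the most distinctive content word in the phrase.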
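Several KenLM rows mention a PROBING structure built on linear probing hash tables for speed. A plain-Python sketch of the probing scheme itself (not KenLM's packed C++ layout; no resizing, so the load factor must stay low):

```python
class LinearProbingTable:
    """Open-addressing hash table with linear probing."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        i = self._index(key)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # probe the next slot on collision
        self.slots[i] = (key, value)

    def get(self, key):
        i = self._index(key)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None  # hit an empty slot: key is absent

t = LinearProbingTable()
t.put(("a", "b", "c"), -1.5)  # e.g. an n-gram mapped to its log probability
print(t.get(("a", "b", "c")))  # → -1.5
```

Linear probing keeps lookups in one contiguous region of memory, which is why it is attractive for speed-oriented designs like the one the rows describe.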