source_text (string, lengths 27-368) | label (int64, 0 or 1) | target_text (string, lengths 1-5.38k) |
---|---|---|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 40 75. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 92 77. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | We conjecture that this trend may continue by incorporating additional information, e.g., three-dimensional models as proposed by Tsarfaty and Sima’an (2007). |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. | 0 | This is in contrast to dependency treebanks, e.g. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | from the subset of the United Informatics corpus not used in the training of the models. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not ac tually tag the words as belonging to one or another class of expression. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment. |
It is annotated at several levels: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | This offers the well-known advantages for interchangeability, but it raises the question of how to query the corpus across levels of annotation. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | One hopes that such a corpus will be forthcoming. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Using these states, we define partial hypothesis extensions, which are of the following type: (S', C \ {j}, j') → (S, C, j). |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Annotators argued for the importance of having correct and even multiple references. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). |
The corpus was annotated with different linguistic information. | 0 | information structure. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | Figure 3: Dev set learning curves for sentence lengths ≤ 70. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 72 78. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | In words, the judgements are normalized, so that the average normalized judgement per judge is 3. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 1 61.2 43. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge). |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | The most frequent NE category pairs are "Person - Person" (209,236), followed by "Country - Country" (95,123) and "Person - Country" (75,509). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | In our case multi-threading is trivial because our data structures are read-only and uncached. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Also, we don't know how many such paraphrase sets are necessary to cover even some everyday things or events. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation; consider, for example, the issues surrounding the pronunciation of gan1 I qian2 discussed in Section 1. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | This is manifest in the lexical choices but 1 www.coli.unisb.de/~thorsten/tnt/ Dagmar Ziegler is up to her neck in debt. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Our system fails in (a) because of $ shen1, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | These knowledge sources determine whether the contexts surrounding an anaphor and antecedent are compatible. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The type-level tag assignments T generate features associated with word types W . The tag assignments constrain the HMM emission parameters θ. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | As such, global information from the whole context of a document is important to more accurately recognize named entities. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Although these existential NPs do not need a prior referent, they may occur multiple times in a document. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | The details of the transformation procedure are slightly different depending on the encoding scheme: for Head, lifted arcs are labeled d↑h (letting the linear head be the syntactic head); for Head+Path, the target arc must have the form wl → wm, and if no target arc is found, Head is used as backoff; for Path, the target arc must have the form wl → wm and have no outgoing arcs of the form wm →p'↓ wo, with no backoff. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | In this work we extended the AdaBoost.MH (Schapire and Singer 98) algorithm to the cotraining case. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Instead, we resort to an iterative update based method. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Subsets C of increasing cardinality c are processed. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The resulting model is compact, efficiently learnable and linguistically expressive. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | For developers of Statistical Machine Translation (SMT) systems, an additional complication is the heterogeneous nature of SMT components (word-alignment model, language model, translation model, etc.). |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case. (A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch.) |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | This makes memory usage comparable to our PROBING model. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The PROBING model can perform optimistic searches by jumping to any n-gram without needing state and without any additional memory. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For E(n_1^{cls}), then, we substitute a smooth S against the number of class elements. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | A receives a votes, and B receives b votes. |
There is no global pruning. | 0 | This approach is compared to another reordering scheme presented in (Berger et al., 1996). |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | G1 and G2 are hanzi, we can estimate the probability of the sequence being a name as the product of: • the probability that a word chosen randomly from a text will be a name-p(rule 1), and • the probability that the name is of the form 1hanzi-family 2hanzi-given-p(rule 2), and • the probability that the family name is the particular hanzi F1-p(rule 6), and • the probability that the given name consists of the particular hanzi G1 and G2-p(rule 9) This model is essentially the one proposed in Chang et al. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | If the log backoff of w_f^n is also zero (it may not be in filtered models), then w_f should be omitted from the state. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Fourth, we show how to build better models for three different parsers. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998). |
Here we present two algorithms. | 0 | Note that in our formalism a weak hypothesis can abstain. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Adam Pauls provided a pre-release comparison to BerkeleyLM and an initial Java interface. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | (Webber et al., 2003). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The final estimating equation is then: (3) Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n1/N. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | i..f,.. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Other approaches encode sparsity as a soft constraint. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | There are many techniques for improving language model speed and reducing memory consumption. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This significantly underperforms log-linear combination. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Formally, let e1 (e2) be the number of classification errors of the first (second) learner on the training data, and let Eco be the number of unlabeled examples on which the two classifiers disagree. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The highestorder N-gram array omits backoff and the index, since these are not applicable. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | 2. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h. To evaluate our performance on the tagging task we report CPOS and FPOS corresponding to coarse- and fine-grained PoS tagging results (F1 measure). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Collisions between two keys in the table can be identified at model building time. |
BABAR performed well in both the terrorism and natural disaster domains, and its contextual-role knowledge proved particularly helpful for pronouns. | 0 | Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | While there are other obstacles to completing this idea, we believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes. |
This corpus has several advantages: it is annotated at different levels. | 0 | After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts, emphasis shifted towards depth. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | of Arabic. |
Here both parametric and non-parametric models are explored. | 0 | The first shows how constituent features and context do not help in deciding which parser to trust. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 5.2 Setup. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Cluster phrases based on Links We now have a set of phrases which share a keyword. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Hence, we take the probability of the event fmnh analyzed as REL VB to be This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh. based on linear context. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | (See Sproat and Shih 1995.) |
They found replacing it with a ranked evaluation to be more suitable. | 0 | We therefore also normalized judgements on a per-sentence basis. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | input token, the segmentation is then performed deterministically given the 1-best analysis. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 1 2 3. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Cluster phrases based on Links We now have a set of phrases which share a keyword. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | We check how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: if p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95, then we have a statistically significant difference between the systems. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | First, we will describe their method and compare it with our method. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test. |
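
Below is a minimal sketch of how rows with this schema could be loaded and filtered using the Hugging Face `datasets` library. The hub identifier `user/source-target-pairs` is a placeholder, since the actual dataset path is not shown on this page; the column names and types follow the table header above.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder hub path: substitute the dataset's actual identifier.
ds = load_dataset("user/source-target-pairs", split="train")

# Schema per the table above: source_text (string), label (int64, 0 or 1),
# target_text (string).
print(ds.column_names)  # ['source_text', 'label', 'target_text']

# Distribution of the binary label over the split.
print(Counter(ds["label"]))

# Inspect a single (source, label, target) row.
row = ds[0]
print(row["source_text"])
print(row["label"])
print(row["target_text"])

# Keep only label-1 rows, e.g. to train on positive pairs alone.
positives = ds.filter(lambda r: r["label"] == 1)
print(f"{len(positives)} of {len(ds)} rows have label 1")
```

All rows shown above carry label 0, so the label-1 filter is only illustrative; the int64 range of 0 to 1 in the schema suggests both labels occur in the full data.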