| source_text (string, 27-368 chars) | label (int64, 0 or 1) | target_text (string, 1-5.38k chars) |
|---|---|---|
There is no global pruning. | 0 | (S; C; j); Not only the coverage set C and the positions j; j0, but also the verbgroup states S; S0 are taken into account. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the 15 GR is .73 or 96%.. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be combined and used to influence the later derivation. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We tried two versions of our graph-based approach: feature after the first stage of label propagation (Eq. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | F1 85 Berkeley 80 Stanford. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | HR0011-06-C-0022. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We evaluate our model on seven languages exhibiting substantial syntactic variation. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | This is done using a simple PCFG which is lexemebased. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | It is difficult to motivate these days why one ministry should be exempt from cutbacks â at the expense of the others. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Reflexive pronouns with only 1 NP in scope.. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For each language under consideration, Petrov et al. (2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic ex pectations of the caseframe that extracts the anaphor. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | More recently, Subramanya et al. (2010) defined a graph over the cliques in an underlying structured prediction model. |
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Phrase tables were extracted from the IN and OUT training corpora (not the dev as was used for instance weighting models), and phrase pairs in the intersection of the IN and OUT phrase tables were used as positive examples, with two alternate definitions of negative examples: The classifier trained using the 2nd definition had higher accuracy on a development set. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The ATB gives several different analyses to these words to indicate different types of coordination. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Language models that contain wi must also contain prefixes wi for 1 G i G k. Therefore, when the model is queried for p(wnjwn−1 1 ) but the longest matching suffix is wnf , it may return state s(wn1) = wnf since no longer context will be found. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Otherwise, it is set to 0. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Step 3. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | N, portion of examples on which both classifiers give a label rather than abstaining), and the proportion of these examples on which the two classifiers agree. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We focus on this difference between the tree sets of CFG's and IG's, and formalize the notion of dependence between paths in a tree set in Section 3. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The samples from each corpus were independently evaluated. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | html 4 www.wagsoft.com/RSTTool assigning rhetorical relations is a process loaded with ambiguity and, possibly, subjectivity. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Email: rlls@bell-labs. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | (b) After they were released... |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Like the string languages of MCTAG's, the complexity of the path set increases as the cardinality of the elementary tee sets increases, though both the string languages and path sets will always be semilinear. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The quasi-monotone search performs best in terms of both error rates mWER and SSER. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 76 16. |
The manual evaluation of scoring translation on a graded scale from 1â5 seems to be very hard to perform. | 0 | The judgements tend to be done more in form of a ranking of the different systems. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | (Hartmann 1984), for example, used the term Reliefgebung to characterize the distibution of main and minor information in texts (similar to the notion of nuclearity in RST). |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | A secondary reference resolution classifier has information on the class assigned by the primary classifier. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages. |
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 7 68.3 56. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Systems that generally do worse than others will receive a negative one. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | However, if we consider precision, recall and Fmeasure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Therefore, performance is more closely tied to the underlying data structure than to the cache. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Conditioned on T , features of word types W are drawn. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff3. |
They focused on phrases which two Named Entities, and proceed in two stages. | 0 | In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009). |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Using a wide-coverage morphological analyzer based on (Itai et al., 2006) should cater for a better coverage, and incorporating lexical probabilities learned from a big (unannotated) corpus (cf. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | We adapt the string pumping lemma for the class of languages corresponding to the complexity of the path set. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name. |
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | The probability of the classes assigned to the words in a sentence in a document is defined as follows: where is determined by the maximum entropy classifier. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The key to the methods we describe is redundancy in the unlabeled data. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphological rich languages, as demonstrated by the results for English-German and English-French. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | A straightforward way to find the shortest tour is by trying all possible permutations of the n cities. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | A total of 13,976 phrases were grouped. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | NN � .e NP NNP NP DTNNP NN � .e NP NP NNP NP Table 5: Evaluation of 100 randomly sampled variation nuclei types. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | We evaluate our model on seven languages exhibiting substantial syntactic variation. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Previous work on morphological and syntactic disambiguation in Hebrew used different sets of data, different splits, differing annotation schemes, and different evaluation measures. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | (1991}, Gu and Mao (1994), and Nie, Jin, and Hannan (1994). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | 17 They also provide a set of title-driven rules to identify names when they occur before titles such as $t. 1: xianlshengl 'Mr.' or i:l:itr!J tai2bei3 shi4zhang3 'Taipei Mayor.' |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The type-level posterior term can be computed according to, P (Ti|W , T âi, β) â Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM. |
This corpus has several advantages: it is annotated at different levels. | 0 | 2.1 Part-of-speech tags. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Furthermore, the systematic way in which particles are prefixed to one another and onto an open-class category gives rise to a distinct sort of morphological ambiguity: space-delimited tokens may be ambiguous between several different segmentation possibilities. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word youlyoul 'soberly' and misinterpreted the thus isolated first youl as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation i:lJI! |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The following rule corresponds to the above derivation, where 71, , 7k are derived from the auxiliary trees , , fik, respectively. for all addresses n in some elementary tree at which 7' can be adjoined. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For each co-occurrence relation (noun/caseframe for CFLex, and caseframe/caseframe for CFNet), BABAR computes its log-likelihood value and looks it up in the Ï2 table to obtain a confidence level. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Making the ten judgements (2 types for 5 systems) takes on average 2 minutes. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code/ with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Almost all annotators expressed their preference to move to a ranking-based evaluation in the future. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The treebank has two versions, v1.0 and v2.0, containing 5001 and 6501 sentences respectively. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | For all these annotation taks, G¨otze developed a series of questions (essentially a decision tree) designed to lead the annotator to the ap propriate judgement. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | We compare three hash tables: our probing implementation, GCC’s hash set, and Boost’s8 unordered. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | We use a patched version of BitPar allowing for direct input of probabilities instead of counts. |
This assumption, however, is not inherent to type-based tagging models. | 0 | While Berg-Kirkpatrick et al. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | These results are promising and there are several avenues for improving on these results. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | When a company buys another company, a paying event can occur, but these two phrases do not indicate the same event. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | We further thank Khalil Simaan (ILLCUvA) for his careful advise concerning the formal details of the proposal. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | All annotations are done with specific tools and in XML; each layer has its own DTD. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Thus, TAG's can not give analyses in which dependencies between arbitrarily large branches exist. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | For the âcoreâ portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective.6 When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright â but see Sections 3.2 and 3.3 below. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | DempsterShafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1). |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | The anaphor and antecedent appear in boldface. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Note that Wang, Li, and Chang's. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | We call these N − 1 words state. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Brin 98) ,describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | Either save money at any cost - or give priority to education. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | For alif with hamza, normalization can be seen as another level of devocalization. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 1 61.2 43. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | IL+-1Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Until now, all evaluations of Arabic parsingâincluding the experiments in the previous sectionâhave assumed gold segmentation. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | We have presented a method for unsupervised part- of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (4) gives In order to minimize Zt, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature xt) which has values for W+ and W_ that minimize Equ. |
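The rows above all follow the three-column schema declared in the header. A minimal sketch of validating records against that schema is shown below; the `validate_row` helper is hypothetical (not part of any dataset library), and the example row is copied from the first record in the table.

```python
# Minimal sketch: validate records against the declared schema
# (source_text: str, label: int in {0, 1}, target_text: str).
# validate_row is a hypothetical helper, not a library API.

def validate_row(row: dict) -> bool:
    """Check one record against the declared column types."""
    return (
        isinstance(row.get("source_text"), str)
        and row.get("label") in (0, 1)
        and isinstance(row.get("target_text"), str)
    )

rows = [
    {
        "source_text": "There is no global pruning.",
        "label": 0,
        "target_text": "(S; C; j); Not only the coverage set C and the "
                       "positions j; j0, but also the verbgroup states "
                       "S; S0 are taken into account.",
    },
]

assert all(validate_row(r) for r in rows)
```

The same check can be applied row by row when streaming the dataset, rejecting any record whose `label` falls outside the declared `{0, 1}` range.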