source_text (string, length 27–368) | label (int64, values 0–1) | target_text (string, length 1–5.38k) |
---|---|---|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The P (T |ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary. |
Here both parametric and non-parametric models are explored. | 0 | Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University. |
They focused on phrases which contain two Named Entities, and proceed in two stages. | 0 | Extract NE pair instances with contexts: From the four years of newspaper corpus, we extracted 1.9 million pairs of NE instances. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc. |
The texts were annotated with the RSTtool. | 0 | 2.6 Co-reference. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Lack of correct reference translations was pointed out as a short-coming of our evaluation. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994). |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | They first collect the NE instance pairs and contexts, just like our method. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Arabic sentences of up to length 63 would need to be. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Thus, the derivation trees for TAG's have the same structure as local sets. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | On the English side, however, the vertices (denoted by Ve) correspond to word types. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Naseem et al. (2009) and Snyder et al. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | We have tested the translation system on the Verbmobil task (Wahlster 1993). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it can not take properly into account the word reordering due to the German verbgroup. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | This was also inspired by the work on the Penn Discourse Tree Bank, which follows similar goals for English. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The index in this array is the vocabulary identifier. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Finally, the assignments of PoS tags to OOV segments is subject to language specific constraints relative to the token it was originated from. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | There are 13 types of extensions needed to describe the verbgroup reordering. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Finally, we intend to explore more sophisticated instance-weighting features for capturing the degree of generality of phrase pairs. |
This corpus has several advantages: it is annotated at different levels. | 0 | 2.4 Underspecified rhetorical structure. |
It is annotated with several levels of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | The PCC is not the result of a funded project. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | 2. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Exposing this information to the decoder will lead to better hypothesis recombination. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | It is generally considered to be fast (Pauls and Klein, 2011), with a default implementation based on hash tables within each trie node. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We check, how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We can check, what the consequences of less manual annotation of results would have been: With half the number of manual judgements, we can distinguish about 40% of the systems, 10% less. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | If a token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | More examples are shown in Figure 5. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | 1). |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 1 | Several coreference resolvers have used supervised learning techniques, such as decision trees and rule learners (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001). |
This assumption, however, is not inherent to type-based tagging models. | 0 | 2 60.7 56. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Section 2.1 describes how BABAR generates training examples to use in the learning process. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | However, there are phrases which express the same meanings even though they do not share the same keyword. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | yields. But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length). |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Specifically, our (+FEATS) model utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P (W |T , ψ) component. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Compared to related work on the recovery of long-distance dependencies in constituency-based parsing, our approach is similar to that of Dienes and Dubey (2003) in that the processing of non-local dependencies is partly integrated in the parsing process, via an extension of the set of syntactic categories, whereas most other approaches rely on postprocessing only. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Also there are cases where one of the two NEs belong to a phrase outside of the relation. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | After adding a ROOT node to all trees, we train a grammar using six split-and- merge cycles and no Markovization. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Sentences and systems were randomly selected and randomly shuffled for presentation. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | There are thus some very good reasons why segmentation into words is an important task. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 9 61.0 44. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | If two systems’ scores are close, this may simply be a random effect in the test data. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Another way to view the judgements is that they are less quality judgements of machine translation systems per se, but rankings of machine translation systems. |
The model incorporates various recent techniques for representing and manipulating linguistic knowledge using finite-state transducers. | 0 | Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Another question that remains unanswered is to what extent the linguistic information he considers can be handled-or at least approximated-by finite-state language models, and therefore could be directly interfaced with the segmentation model that we have presented in this paper. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted. |
The texts were annotated with the RSTtool. | 0 | For illustration, an English translation of one of the commentaries is given in Figure 1. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | The SynRole KS computes the relative frequency with which the candidates’ syntactic role (subject, direct object, PP object) appeared in resolutions in the training set. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | This algorithm can be applied to statistical machine translation. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | For instance, the sentence Similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domain-specific despite the presence of general phrases like were reported in. |
This assumption, however, is not inherent to type-based tagging models. | 0 | The tokens w are generated by token-level tags t from an HMM parameterized by the lexicon structure. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices.4 The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | (b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG,VN) and adjectives (e.g., JJ). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Table 1 The cost as a novel given name (second position) for hanzi from various radical classes. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | SRILM (Stolcke, 2002) is widely used within academia. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The advantage is that we can recombine search hypotheses by dynamic programming. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | att. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 4.3 Morphological Analysis. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | 3 The Coreference Resolution Model. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For a trigram language model, the partial hypotheses are of the form (e0; e; C; j). |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Arabic sentences of up to length 63 would need to be. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D0 = (W, A0) of a (nonprojective) dependency graph D = (W, A): The function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right). |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 7 Conclusion and Future Work. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | However, it is almost universally the case that no clear definition of what constitutes a "correct" segmentation is given, so these performance measures are hard to evaluate. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus. (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.) |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Since different judges judged different systems (recall that judges were excluded to judge system output from their own institution), we normalized the scores. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | If the semantic expectations do not intersect, then we know that the case- frames extract mutually exclusive types of noun phrases. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Given a set of n sentences, we can compute the sample mean x̄ and sample variance s2 of the individual sentence judgements xi. The extent of the confidence interval [x̄−d, x̄+d] can be computed by d = 1.96 · s/√n. Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | First, we identify sources of syntactic ambiguity understudied in the existing parsing literature. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | A generative model was applied (similar to naive Bayes) with the three labels as hidden variables on unlabeled examples, and observed variables on (seed) labeled examples. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. on which Wang, Li, and Chang tested their system, along with the output of their system. In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | For example, given a sequence F1G1G2, where F1 is a legal single-hanzi family name. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Note, however, that there might be situations in which Zco in fact increases. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Figure 3 Figure 1. |
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data. | 0 | Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net. Corporate Names: http://www.fmlx.com. Person First Names: http://www.census.gov/genealogy/names. Person Last Names. The McCann family . . . |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | For example, one of the ATB samples was the determiner dhalik "that." The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The developer explained that the loading process requires extra memory that it then frees. Based on the ratio to SRI’s speed reported in Guthrie and Hepple (2010) under different conditions. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | token-level HMM to reflect lexicon sparsity. |
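The rows above follow a simple (source_text, label, target_text) schema, where the label column appears to mark whether target_text is a positive match for source_text (1) or not (0). A minimal sketch of working with such rows in plain Python, using two rows copied from the table above; the `Row` class and `filter_by_label` helper are hypothetical names, not part of the dataset:

```python
from dataclasses import dataclass

@dataclass
class Row:
    source_text: str
    label: int  # 0 or 1, as in the table's label column
    target_text: str

# Two rows copied from the table above.
rows = [
    Row(
        source_text="Finally, several coreference systems have successfully "
                    "incorporated anaphoricity determination modules.",
        label=1,
        target_text="Several coreference resolvers have used supervised "
                    "learning techniques, such as decision trees and rule "
                    "learners (Aone and Bennett, 1995; McCarthy and Lehnert, "
                    "1995; Ng and Cardie, 2002; Soon et al., 2001).",
    ),
    Row(
        source_text="This paper talks about KenLM: Faster and Smaller "
                    "Language Model Queries.",
        label=0,
        target_text="The index in this array is the vocabulary identifier.",
    ),
]

def filter_by_label(data, label):
    """Return only the rows carrying the given label value."""
    return [r for r in data if r.label == label]

print(len(filter_by_label(rows, 1)))  # -> 1
```

The same schema loads directly into a Hugging Face `datasets.Dataset` via `Dataset.from_dict` with three parallel columns, should the table need to be consumed programmatically.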