source_text (string, length 27–368) | label (int64, 0–1) | target_text (string, length 1–5.38k) |
---|---|---|
This assumption, however, is not inherent to type-based tagging models. | 0 | Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Extract NE instance pairs with contexts First, we extract NE pair instances with their context from the corpus. |
Here both parametric and non-parametric models are explored. | 0 | Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. a set of initial estimates of the word frequencies. In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors. |
Bean and Riloff used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The contextual role knowledge that BABAR uses for coreference resolution is derived from this caseframe data. |
There is no global pruning. | 0 | We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Domain / Link accuracy / WN coverage: CC 73.3%, 2/11; PC 88.9%, 2/8 (Table 2). |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Our motivation for using Dempster-Shafer is that it provides a well-principled framework for combining evidence from multiple sources with respect to competing hypotheses. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This is not ideal for some applications, however. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | “The gun” will be extracted by the caseframe “fired <patient>”. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | When aligning the words in parallel texts (for language pairs like Spanish-English, French-English, Italian-German, ...), we typically observe a strong localization effect. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The following features were used: full-string=x The full string (e.g., for Maury Cooper, full-string=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Table 3: Dev set frequencies for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | As lower frequency examples include noise, we set a threshold that an NE category pair should appear at least 5 times to be considered and an NE instance pair should appear at least twice to be considered. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Our initial experimentation with the evaluation tool showed that this is often too overwhelming. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | . |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998). |
Bean and Riloff used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 4 Evaluation Results. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | On the other hand, no linguistic use is made of this general form of composition, and Steedman (personal communication) and Steedman (1986) argues that a more limited definition of composition is more natural. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The sequence of states needed to carry out the word reordering example in Fig. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Output of the learning algorithm: a function h : X × Y → [0, 1] where h(x, y) is an estimate of the conditional probability p(y|x) of seeing label y given that feature x is present. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | (S, C, j): not only the coverage set C and the positions j, j′, but also the verbgroup states S, S′ are taken into account. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | This differs from other implementations (Stolcke, 2002; Pauls and Klein, 2011) that use hash tables as nodes in a trie, as explained in the next section. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Commentaries argue in favor of a specific point of view toward some political issue, often discussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | With respect to exact match, the improvement is even more noticeable, which shows quite clearly that even if non-projective dependencies are rare on the token level, they are nevertheless important for getting the global syntactic structure correct. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We address the question of whether or not a formalism can generate only structural descriptions with independent paths. |
The texts were annotated with the RSTtool. | 0 | The web-based Annis imports data in a variety of XML formats and tagsets and displays it in a tier-oriented way (optionally, trees can be drawn more elegantly in a separate window). |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Here, an NE instance pair is any pair of NEs separated by at most 4 syntactic chunks; for example, “IBM plans to acquire Lotus”. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: n labeled examples of the form (xi, yi). yi is the label of the ith example (given that there are k possible labels, yi is a member of Y = {1 ... k}). xi is a set of mi features {xi1, xi2, ..., ximi}. |
This corpus has several advantages: it is annotated at different levels. | 0 | • Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 8 1 2. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Pseudo-labels are formed by taking seed labels on the labeled examples, and the output of the fixed classifier on the unlabeled examples. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | ”). |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | This is a simple and effective alternative to setting weights discriminatively to maximize a metric such as BLEU. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The manual scores are averages over the raw unnormalized scores. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P(T|ψ). |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The features we used can be divided into 2 classes: local and global. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The work of Rounds (1969) shows that the path sets of trees derived by IG's (like those of TAG's) are context-free languages. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | Pseudo-Projective Dependency Parsing |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | [Transducer fragment: zhong1:0.0, hua2:0.0, min2:0.0, guo2:0.0 (Republic of China); _ADV:5.98, _NC:4.41] |
There is no global pruning. | 0 | Word Re-ordering and DP-based Search in Statistical Machine Translation |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other. |
The texts were annotated with the RSTtool. | 0 | In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The productions of HG's are very similar to those of CFG's except that the operation used must be made explicit. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | It is also worth pointing out a connection with Daumé’s (2007) work that splits each feature into domain-specific and general copies. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | How do additional ambiguities caused by devocalization affect statistical learning? |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Local features are features that are based on neighboring tokens, as well as the token itself. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are "popular" or not. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | 13. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | For a trigram language model, the partial hypotheses are of the form (e′, e, C, j). |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | In order to solve this problem, a parse tree is needed to understand that “Lotus” is not the object of “estimates”. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | • Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 27 80. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | 25 16. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | An example is in (i), where the system fails to group t;,f;?"$?t!: lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (t;,f; lin2 'wood';?"$ yang2 'ocean'; ?t!; gang3 'harbor'). |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined and sketch how semilinearity and polynomial recognition of these systems follows. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb :1 cai2 'just,' the modal verb neng2 'be able' and the main verb R: Hke4fu2 'overcome'; the competing analysis is, of course, that we have the noun :1 cai2neng2 'talent,' followed by }'lijke4fu2 'overcome.' |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | © 1996 Association for Computational Linguistics. Figure 1: A Chinese sentence in (a) illustrating the lack of word boundaries; (b) the plausible segmentation ri4wen2 zhang1yu2 zen3me0 shuo1 'Japanese' 'octopus' 'how' 'say' ('How do you say octopus in Japanese?'); (c) the implausible segmentation ri4 wen2 zhang1 yu2 zen3 me0 shuo1 'Japan' 'essay' 'fish' 'how' 'say'. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We also mark all nodes that dominate an SVO configuration (containsSVO). |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | On the MUC6 data, Bikel et al. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | At the very least, we are creating a data resource (the manual annotations) that may the basis of future research in evaluation metrics. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 0 57.3 51. |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure. | 0 | Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure. |
The texts were annotated with the RSTtool. | 0 | The Potsdam Commentary Corpus |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance). |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | We picked two domains, the CC-domain and the “Person – Company” domain (PC-domain), for the evaluation, as the entire system output was too large to evaluate. |
All the texts were annotated by two people. | 0 | And indeed, converging on annotation guidelines is even more difficult than it is with co-reference. |
A beam search concept is applied as in speech recognition. | 0 | 4. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Systems that generally do better than others will receive a positive average normalizedjudgement per sentence. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In our situation, the competing hypotheses are the possible antecedents for an anaphor. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The original OUT counts co(s, t) are weighted by a logistic function wλ(s, t): To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities θ = {p(s|t), ∀s, t}, which is the likelihood with respect to the true IN distribution pi(s, t). |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | For the LM, adaptive weights are set as follows: where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), pi are the corresponding domain-specific models, and ˜p(w, h) is an empirical distribution from a target-language training corpus—we used the IN dev set for this. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Lexicalizing several POS tags improves performance. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The evaluation framework for the shared task is similar to the one used in last year’s shared task. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | We can now add a new weak hypothesis ht based on a feature in X1 with a confidence value αt. ht and αt are chosen to minimize the function. We now define, for 1 ≤ i ≤ n, the following virtual distribution. As before, Zt is a normalization constant. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 1 | One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | All features were conjoined with the state z. |
This assumption, however, is not inherent to type-based tagging models. | 0 | However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. |
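The rows above are pipe-delimited with the layout source_text | label | target_text and a trailing pipe. As a minimal sketch of how one might split such a row back into its three fields (assuming no field contains a literal pipe; the helper name `parse_row` is ours, not part of the dataset):

```python
def parse_row(line: str):
    """Split a 'source_text | label | target_text |' row into its three fields.

    Assumes the fields themselves contain no literal '|' characters,
    which holds for the rows shown above.
    """
    # Drop surrounding whitespace and the trailing pipe, then split on '|'.
    parts = [p.strip() for p in line.strip().strip("|").split("|")]
    source_text, label, target_text = parts[0], int(parts[1]), parts[2]
    return source_text, label, target_text

# Example usage on a row taken verbatim from the table:
row = "There is no global pruning. | 0 | A beam search concept is applied as in speech recognition. |"
src, lbl, tgt = parse_row(row)
assert lbl == 0 and src == "There is no global pruning."
```

Note that the label column is binary (int64, 0–1): 1 marks rows whose source and target texts match, as in the one positive row above.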