source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k)
---|---|---|
They showed that better grammars improve performance on both morphological and syntactic tasks, supporting the advantage of a joint framework over pipelined or factorized ones. | 0 | In our model, however, all lattice paths are taken to be a-priori equally likely. |
The corpus was annotated with different linguistic information. | 0 | …ested in correlations between prosody and discourse structure. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 3 68.4 49. |
There is no global pruning. | 0 | The search starts in hypothesis ({}, 0) and ends in the hypotheses ({1, …, J}, j), with j ∈ {1, …, J}. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If “gun” and “revolver” refer to the same object, then it should also be acceptable to say that Fred was “killed with a gun” and that the burglar “fired a revolver”. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | This essentially means that a better grammar tunes the joint model for optimized syntactic disambiguation, at least inasmuch as their hyperparameters do. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | For all lists except locations, the lists are processed into a list of tokens (unigrams). |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence. |
This corpus has several advantages: it is annotated at different levels. | 0 | 3.5 Improved models of discourse. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | The results, along with the total number of phrases, are shown in Table 1. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task. |
The AdaBoost algorithm was developed for supervised learning. | 0 | (We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.) |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989). |
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs. | 0 | As mentioned above, it is not obvious how to apply Daumé’s approach to multinomials, which do not have a mechanism for combining split features. |
Due to the many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics. | 0 | About half of the participants of last year’s shared task participated again. |
All the texts were annotated by two people. | 0 | Hence we decided to select ten commentaries to form a “core corpus”, for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task. |
Here we present two algorithms. | 0 | In the named entity task, X1 might be the instance space for the spelling features, X2 might be the instance space for the contextual features. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | In the following, we assume that this word joining has been carried out. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | 1. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | The belief value that would have been assigned to the intersection of these sets is .60*.70=.42, but this belief has nowhere to go because the null set is not permissible in the model. So this probability mass (.42) has to be redistributed. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | (Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Ltd., then organization will be more probable. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The computation of P(π(c) | M1(c) … Mk(c)) has been sketched before in Equations 1 through 4. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Given a key k, it estimates the position. If the estimate is exact (A[pivot] = k), then the algorithm terminates successfully. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 3 58.3 40. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | This paper presents methods to query N-gram language models, minimizing time and space costs. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 73 81. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | To resolve the anaphor, we survey the final belief values assigned to each candidate’s singleton set. |
The AdaBoost algorithm was developed for supervised learning. | 0 | We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In this paper, our goal has been to use the notion of LCFRS's to classify grammatical systems on the basis of their strong generative capacity. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning. |
There is no global pruning. | 0 | The advantage is that we can recombine search hypotheses by dynamic programming. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For the seen word [hanzi], 'generals,' there is a c:NC transduction from [hanzi] to the node preceding [hanzi]; this arc has cost cost([hanzi]) − cost(unseen([hanzi])), so that the cost of the whole path is the desired cost([hanzi]). |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | First, we aim to explicitly characterize examples from OUT as belonging to general language or not. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | We have shown that the maximum entropy framework is able to use global information directly. |
The corpus was annotated with different linguistic information. | 0 | 3.4 Salience-based text generation. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Based on these high-confidence alignments we can extract tuples of the form [u ↔ v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts. |
A beam search concept is applied as in speech recognition. | 0 | The complexity of the quasi-monotone search is O(E³ · J · (R² + L·R)). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | A related point is that mutual information is helpful in augmenting existing electronic dictionaries (cf. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | [Figure 7: Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions. x-axis: Dimension 1 (62%); plotted series: antigreedy, greedy, current method, dict. only, Taiwan, Mainland.] |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2–21 (train) and section 23 (test). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | See Table 2 for the tag set size of other languages. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The terms on the right-hand-side denote the type-level and token-level probability terms respectively. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | A cell in the bottom row of the parse chart is required for each potential whitespace boundary. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The performance of our system on those sentences appeared rather better than theirs. |
A beam search concept is applied as in speech recognition. | 0 | The alignment model uses two kinds of parameters: alignment probabilities p(a_j | a_{j−1}; I; J), where the probability of alignment a_j for position j depends on the previous alignment position a_{j−1} (Ney et al., 2000), and lexicon probabilities p(f_j | e_{a_j}). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Two issues distinguish the various proposals. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 2 62.6 45. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 8 1 8. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The cost estimate, cost([hanzi]), is computed in the obvious way by summing the negative log probabilities of [hanzi]. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Therefore, for n-gram w_1^n, all leftward extensions w_0^n are an adjacent block in the (n+1)-gram array. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | For all languages, the vocabulary sizes increase by several thousand words. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead). |
This corpus has several advantages: it is annotated at different levels. | 0 | Besides information structure, the second main goal is to enhance current models of rhetorical structure. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Arabic sentences of up to length 63 would need to be. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | For example, a story can mention “the FBI”, “the White House”, or “the weather” without any prior referent in the story. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Since our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combination. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 1 1 0. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | For that application, at a minimum, one would want to know the phonological word boundaries. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | ({1, …, m}, l) ∈ ({1, …, m} \ {l, l1}, l′) → |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 0 70.9 42. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 9 65.5 46. |
All the texts were annotated by two people. | 0 | As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs’ structures – cf. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | We suspect that the higher precision in the disasters domain may be due to its substantially larger training corpus. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The system described in this paper is similar to the MENE system of (Borthwick, 1999). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from […]. [Table 2: Coverage set hypothesis extensions for the IBM reordering.] |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples. |
Their results show that their high performance NER use less training data than other systems. | 0 | For example: McCann initiated a new global system. |
The corpus was annotated with different linguistic information. | 0 | The general idea for the knowledge-based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | It may be more realistic to replace the second criterion with a softer one; for example, (Blum and Mitchell 98) suggest the alternative […]. Alternatively, if f1 and f2 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners. |
The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination. Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus log-likelihood, which is roughly speaking the training criterion used by the LM and TM themselves. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007). |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets. |
Their results show that their high performance NER use less training data than other systems. | 0 | Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 7 www.cis.upenn.edu/~pdtb/ 8 www.eml-research.de/english/Research/NLP/Downloads … had to buy a new car. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | We call this technique constituent voting. |
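
Some target sentences above reference standard algorithms only in passing. As an illustration of the Dempster–Shafer arithmetic quoted in the BABAR row above (the .60*.70 = .42 mass that "has nowhere to go" because the null set is not permissible), here is a minimal sketch of Dempster's rule of combination. It is not BABAR's implementation; the frame and the two example mass functions are hypothetical, chosen only to reproduce the .42 conflict from the quoted sentence.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses -> mass (each summing to 1).
    Mass that falls on an empty intersection (the conflict K) cannot be
    assigned to the null set, so it is removed and the remainder is
    renormalized by 1 - K.
    """
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # e.g. .60 * .70 = .42 in the quoted example
    if conflict >= 1.0:
        raise ValueError("total conflict: mass functions are incompatible")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}, conflict

# Hypothetical frame and mass functions reproducing the .42 conflict:
theta = frozenset({"candidate1", "candidate2"})
m1 = {frozenset({"candidate1"}): 0.60, theta: 0.40}
m2 = {frozenset({"candidate2"}): 0.70, theta: 0.30}
posterior, k = combine(m1, m2)
print(round(k, 2))  # 0.42 -- the mass that "has nowhere to go"
print(posterior)    # the remaining .58, renormalized
```

Renormalizing by 1 − K is exactly the redistribution the quoted sentence describes: the .42 of conflicting mass is discarded and the surviving .58 is rescaled to sum to one.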
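Similarly, the KenLM row "Given a key k, it estimates the position…" refers to interpolation search over a sorted array. The sketch below shows the idea with the same A[pivot] = k termination test; it is a simplified Python rendering under assumed integer keys, not KenLM's actual C++ code, and the example array is made up.

```python
def interpolation_search(A, k):
    """Interpolation search over a sorted array A: estimate the pivot
    from the key's value rather than always bisecting, as done for
    sorted n-gram arrays. Returns the index of k, or None if absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi and A[lo] <= k <= A[hi]:
        if A[hi] == A[lo]:
            pivot = lo
        else:
            # Position estimate: linear interpolation between endpoints.
            pivot = lo + (k - A[lo]) * (hi - lo) // (A[hi] - A[lo])
        if A[pivot] == k:  # estimate is exact: terminate successfully
            return pivot
        if A[pivot] < k:
            lo = pivot + 1
        else:
            hi = pivot - 1
    return None

print(interpolation_search([2, 7, 11, 19, 23, 31], 19))  # 3
```

On uniformly distributed keys this needs O(log log n) probes versus O(log n) for binary search, which is why it pays off for large sorted n-gram arrays.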