source_text (string, 27–368 chars) | label (int64: 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Word Head Of Complement POS 1 '01 inna “Indeed, truly” VP Noun VBP 2 '01 anna “That” SBAR Noun IN 3 01 in “If” SBAR Verb IN 4 01 an “to” SBAR Verb IN Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form 0 an. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | For example, if the token is found in the list of person first names, the feature PersonFirstName is set to 1. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Handling Unknown Tokens: When handling unknown tokens in a language such as Hebrew, various important aspects have to be borne in mind. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Because we are working with only three parsers, the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itself. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | As a result, CFG's can not provide the structural descriptions in which there are nested dependencies between symbols labelling a path. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | In this paper, our goal has been to use the notion of LCFRS's to classify grammatical systems on the basis of their strong generative capacity. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Formally, let $e_1$ ($e_2$) be the number of classification errors of the first (second) learner on the training data, and let $e_{CO}$ be the number of unlabeled examples on which the two classifiers disagree. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The second concerns the methods used (if any) to extend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Note also that the costs currently used in the system are actually string costs, rather than word costs. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Feature-based HMM Model (Berg-Kirkpatrick et al., 2010): The KM model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm; Posterior regularization model (Graça et al., 2009): The G10 model uses the posterior regularization approach to ensure the tag sparsity constraint. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Scoping filters a candidate if it is outside the anaphor's scope. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | If two nouns have mutually exclusive semantic classes, then they cannot be coreferent. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Figure 5: An example of affixation: the plural affix. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | There are still some open issues to be resolved with the format, but it represents a first step. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The out-of-domain test set differs from the Europarl data in various ways. |
A beam search concept is applied as in speech recognition. | 0 | Using these states, we define partial hypothesis extensions, which are of the following type: $(S', \mathcal{C} \setminus \{j\}, j') \rightarrow (S, \mathcal{C}, j)$ |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | 13. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The second row represents the performance of the median hyperparameter setting. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Each knowledge source then assigns a probability estimate to each candidate, which represents its belief that the candidate is the antecedent for the anaphor. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | 3.1 Word ReOrdering with Verbgroup. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | A search restriction especially useful for the translation direction from German to English is presented. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98). |
A beam search concept is applied as in speech recognition. | 0 | The advantage is that we can recombine search hypotheses by dynamic programming. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (e.g., N.Y. would contribute this feature, IBM would not). nonalpha=x Appears if the spelling contains any characters other than upper or lower case letters. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | We have described two data structures for language modeling that achieve substantial reductions in time and memory cost. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Several systems propose statistical methods for handling unknown words (Chang et al. 1992; Lin, Chiang, and Su 1993; Peng and Chang 1993). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Backoff-smoothed models estimate this probability based on the observed entry with longest matching history $w_f^n$, returning $p(w_n \mid w_1^{n-1}) = p(w_n \mid w_f^{n-1}) \prod_{i=1}^{f-1} b(w_i^{n-1})$, where the probability $p(w_n \mid w_f^{n-1})$ and backoff penalties $b(w_i^{n-1})$ are given by an already-estimated model. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions. |
All the texts were annotated by two people. | 0 | annotation guidelines that tell annotators what to do in case of doubt. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Further, it needs extra pointers in the trie, increasing model size by 40%. |
A beam search concept is applied as in speech recognition. | 0 | Two subjects are each given a calendar and they are asked to schedule a meeting. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | We implement two data structures: PROBING, designed for speed, and TRIE, optimized for memory. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The key to the methods we describe is redundancy in the unlabeled data. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The 1st block contains the simple baselines from section 2.1. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Otherwise, it is set to 0. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | For the LM, adaptive weights are set as follows: $\hat{\alpha} = \arg\max_{\alpha} \sum_{w,h} \tilde{p}(w,h) \log \sum_i \alpha_i p_i(w|h)$, where $\alpha$ is a weight vector containing an element $\alpha_i$ for each domain (just IN and OUT in our case), $p_i$ are the corresponding domain-specific models, and $\tilde{p}(w, h)$ is an empirical distribution from a target-language training corpus; we used the IN dev set for this. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The likelihood of the observed data under the model is where P(yi, xi) is defined as in (9). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Note that in our construction arcs can never cross token boundaries. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Further, it needs extra pointers in the trie, increasing model size by 40%. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | We first define "pseudo-labels" $\tilde{y}_i$ as follows: $\tilde{y}_i = y_i$ for $1 \le i \le m$, and $\tilde{y}_i = \operatorname{sign}(g_2(x_{2,i}))$ for $m < i \le n$. Thus the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | A position is represented by the word at that position. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Second, BABAR performs reliable case resolution to identify anaphora that can be easily resolved using the lexical and syntactic heuristics described in Section 2.1. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Notice that the CC-domain is a special case. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | A morphological analyzer $M : \mathcal{W} \to \mathcal{L}$ is a function mapping sentences in Hebrew ($W \in \mathcal{W}$) to their corresponding lattices ($M(W) = L \in \mathcal{L}$). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | $(\{1, \ldots, m\} \setminus \{l_1, l_2\}, l) \rightarrow (\{1, \ldots, m-1\} \setminus \{l_1, l_2, l_3\}, l')$ |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Systems that generally do worse than others will receive a negative one. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | The contextual role knowledge that BABAR uses for coreference resolution is derived from this caseframe data. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Out of those 15 links, 4 are errors, namely “buy - pay”, “acquire - pay”, “purchase - stake”, and “acquisition - stake”. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS GRAMMATICAL FORMALISMS* |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The domain is general politics, economics and science. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | We evaluate the system's performance by comparing its segmentation 'judgments' with the judgments of a pool of human segmenters, and the system is shown to perform quite well. |
They found replacing it with a ranked evaluation to be more suitable. | 1 | Replacing this with a ranked evaluation seems to be more suitable. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The treebank has two versions, v1.0 and v2.0, containing 5001 and 6501 sentences respectively. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between ε and part-of-speech labels. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhong1 could be 中 'middle,' 鐘 'clock,' 終 'end,' or 忠 'loyal.' |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | This is a simple and effective alternative to setting weights discriminatively to maximize a metric such as BLEU. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The SynRole KS computes the relative frequency with which the candidates' syntactic role (subject, direct object, PP object) appeared in resolutions in the training set. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Microsoft’s approach uses dependency trees, others use hierarchical phrase models. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | See Table 2 for the tag set size of other languages. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | (2009). |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Table 1: Syntactic Seeding Heuristics. BABAR's reliable case resolution heuristics produced a substantial set of anaphor/antecedent resolutions that will be the training data used to learn contextual role knowledge. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Specifically, we assume each word type W consists of feature-value pairs (f, v). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We tagged each noun with the top-level semantic classes assigned to it in WordNet. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The first row represents the average accuracy of the three parsers we combine. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | For example, from the sentence “Mr. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Although this is not a precise criterion, most cases we evaluated were relatively clear-cut. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names. Wang, Li, and Chang also compare their performance with Chang et al.'s system. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000). |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | Table 5 shows the overall parsing accuracy attained with the three different encoding schemes, compared to the baseline (no special arc labels) and to training directly on non-projective dependency graphs. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Hence, trees shown in Figure 8 can not be generated by any MCTAG (but can be generated by an IG) because the number of pairs of dependent paths grows with n. Since the derivation trees of TAG's, MCTAG's, and HG's are local sets, the choice of the structure used at each point in a derivation in these systems does not depend on the context at that point within the derivation. |
The AdaBoost algorithm was developed for supervised learning. | 1 | The AdaBoost algorithm was developed for supervised learning. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 8 1 8. |
All the texts were annotated by two people. | 0 | The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map TM), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | In this section, we extend state to optimize left-to-right queries. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
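A minimal usage sketch for working with rows like those above. It assumes the split has been exported to a local CSV file named data.csv with the three columns from the header (source_text, label, target_text); the actual dataset ID and file name are not given on this page, so treat both as placeholders. Judging from the rows shown, label 1 appears to mark a target_text that matches its source_text summary, and label 0 a non-matching pair.

```python
# Minimal sketch (assumed file name and column layout; see note above).
import pandas as pd

# Load the rows; "data.csv" is a placeholder for the actual export.
df = pd.read_csv("data.csv")

# Sanity-check the schema shown in the table header.
assert list(df.columns) == ["source_text", "label", "target_text"]
assert df["label"].isin([0, 1]).all()

# Distribution of matching (1) vs. non-matching (0) pairs.
print(df["label"].value_counts())

# Length statistics, mirroring the stringlengths ranges in the header.
print(df["source_text"].str.len().describe())
print(df["target_text"].str.len().describe())
```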